Big Chemical Encyclopedia


Residuals, general least squares

The sum of squares of residuals has to be minimized according to the general least squares (LS) criterion... [Pg.157]

The system identification step in the core-box modeling framework has two major sub-steps: parameter estimation and model quality analysis. The parameter estimation step is usually solved as an optimization problem that minimizes a cost function that depends on the model's parameters. One choice of cost function is the sum of squares of the residuals, ε_i(t; p) = y_i(t) − ŷ_i(t; p). However, one usually needs to put different weights, w_i(t), on the different samples, and additional information that is not part of the time series is often added as extra terms, ε_k(p). These extra terms are large if the extra information is violated by the model, and small otherwise. A general least-squares cost function, V(p), is thus of the form... [Pg.126]
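A minimal sketch of such a cost function in Python, assuming hypothetical predict, w, and extra_terms arguments (these names are illustrative and not taken from the source):

```python
import numpy as np

def weighted_ls_cost(p, t, y, predict, w, extra_terms=()):
    """General weighted least-squares cost V(p): weighted squared residuals
    eps(t; p) = y(t) - yhat(t; p), plus optional extra terms that become large
    when additional (non-time-series) information is violated by the model."""
    residuals = y - predict(t, p)              # eps(t; p)
    cost = np.sum(w * residuals ** 2)          # weighted sum of squared residuals
    cost += sum(term(p) for term in extra_terms)
    return cost
```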

The general least-squares treatment requires that the generalized sum of squares of the residuals, the variance σ², be minimized. This is, by the geometry of error space, tantamount to the requirement that the residual vector be orthogonal with respect to fit space, and this is guaranteed when the scalar products of all fit vectors (the rows of Xᵀ) with the residual vector vanish, Xᵀ M⁻¹ ε = 0, where M⁻¹ is the metric of error space. The successful least-squares treatment [34] yields the following minimum-variance linear unbiased estimators for the variables, their covariance matrix, the variance of the fit, the residuals, and their covariance matrix ... [Pg.73]
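A standard generalized least-squares sketch along these lines, assuming a design matrix X, data vector y, and error covariance M (the exact estimator expressions of [34] are given in the cited source):

```python
import numpy as np

def gls_fit(X, y, M):
    """Generalized least squares with error metric M (the error covariance).
    The estimate solves the orthogonality condition X^T M^{-1} (y - X b) = 0."""
    Minv = np.linalg.inv(M)
    N = X.T @ Minv @ X
    b = np.linalg.solve(N, X.T @ Minv @ y)         # minimum-variance linear unbiased estimate
    cov_b = np.linalg.inv(N)                       # covariance matrix of the estimates
    r = y - X @ b                                  # residual vector (orthogonal to fit space)
    s2 = (r @ Minv @ r) / (len(y) - X.shape[1])    # variance of the fit
    return b, cov_b, s2, r
```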

The particular choice of a residual variance model should be based on the nature of the response function. Sometimes φ is unknown and must be estimated from the data. Once a structural model and a residual variance model are chosen, the choice then becomes how to estimate θ, the structural model parameters, and φ, the residual variance model parameters. One commonly advocated method is the method of generalized least squares (GLS). First it will be assumed that φ is known and then that assumption will be relaxed. In the simplest case, assume that θ is known, in which case the weights are given by... [Pg.132]

Within NONMEM, a generalized least-squares-like (GLS-like) estimation algorithm can be developed by iterating separate, sequential models. In the first step, the model is fit using one of the estimation algorithms (FO-approximation, FOCE, etc.). The individual predicted values are saved in a data set that is formatted the same as the input data set, i.e., the output data set contains the original data set plus one more variable: the individual predicted values. The second step then models the residual error based on the individual predicted values obtained in the previous step. So, for example, suppose the residual error was modeled as a proportional error model... [Pg.230]
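A conceptual sketch of such a two-step iteration in plain Python/SciPy (not NONMEM code); the proportional-error weighting follows the example mentioned in the text, and the function and parameter names are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def gls_like_fit(t, y, model, p0, n_iter=3):
    """Alternate between fitting the structural model with fixed weights and
    recomputing the weights from the current individual predictions, assuming a
    proportional residual-error model (variance proportional to yhat^2)."""
    p = np.asarray(p0, dtype=float)
    w = np.ones_like(y)
    for _ in range(n_iter):
        res = least_squares(lambda q: np.sqrt(w) * (y - model(t, q)), p)
        p = res.x
        yhat = model(t, p)
        w = 1.0 / np.maximum(yhat ** 2, 1e-12)   # proportional error -> 1/yhat^2 weights
    return p
```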

The study of the residuals is very important in diagnostic statistics. Let us return to the general least squares model given in Eq. (6.41). We rewrite it here for a single y variable as follows ... [Pg.247]

In this section, we present an iterative algorithm in the spirit of the generalized least squares approach (Goodwin and Payne, 1977) for simultaneous estimation of an FSF process model and an autoregressive (AR) noise model. The unique features of our algorithm are the application of the PRESS statistic introduced in Chapter 3 for both process and noise model structure selection, to ensure whiteness of the residuals, and the use of covariance matrix information to derive statistical confidence bounds for the final process step response estimates. An important assumption in this algorithm is that the noise term v(k) can be described by an AR time series model given by... [Pg.119]
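The FSF/PRESS machinery itself is not reproduced here; a minimal sketch of the AR noise sub-step (Yule-Walker estimation of AR coefficients from a residual series, followed by pre-filtering to whiten it) might look as follows:

```python
import numpy as np

def fit_ar(e, order):
    """Estimate AR(order) coefficients of a residual series e(k) from the
    Yule-Walker equations: e(k) = a1*e(k-1) + ... + ap*e(k-p) + white noise."""
    e = np.asarray(e, dtype=float) - np.mean(e)
    r = np.array([e[:len(e) - k] @ e[k:] / len(e) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

def ar_whiten(x, a):
    """Filter a signal with the inverse AR model so that, if the AR model is
    adequate, the filtered residuals are approximately white."""
    x = np.asarray(x, dtype=float)
    p = len(a)
    return np.array([x[k] - a @ x[k - p:k][::-1] for k in range(p, len(x))])
```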

The field points must then be fitted to predict the activity. There are generally far more field points than known compound activities to be fitted. The least-squares algorithms used in QSAR studies do not function for such an underdetermined system. A partial least squares (PLS) algorithm is used for this type of fitting. This method starts with matrices of field data and activity data. These matrices are then used to derive two new matrices containing a description of the system and the residual noise in the data. Earlier studies used a similar technique, called principal component analysis (PCA). PLS is generally considered to be superior. [Pg.248]
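A short illustration of PLS regression on an underdetermined field-point matrix, using scikit-learn's PLSRegression; the data and the choice of three latent variables are purely illustrative:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 500))                      # 20 compounds, 500 field points
y = X[:, :3] @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=20)

pls = PLSRegression(n_components=3)                 # a few latent variables, not full rank
pls.fit(X, y)
predicted_activity = pls.predict(X).ravel()
```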

Once the form of the correlation is selected, the values of the constants in the equation must be determined so that the differences between calculated and observed values are within the range of assumed experimental error for the original data. However, when there is some scatter in a plot of the data, the best line that can be drawn to represent the data must be determined. If it is assumed that all experimental errors are in the y values and the x values are known exactly, the least-squares technique may be applied. In this method the constants of the best line are those that minimize the sum of the squares of the residuals, i.e., the differences between the observed values, y, and the calculated values, Y. In general, this sum of the squares of the residuals, R, is represented by... [Pg.244]

If the explicit solution cannot be used or appears impractical, we have to return to the general formulation of the problem, given at the beginning of the last section, and search for a solution without any simplifying assumptions. The system of normal equations (34) can be solved numerically in the following simple way (164). Let us choose an arbitrary value x (= T) and search for the optimum ordinate of the point of intersection y (= log k) and the optimum values of the slopes b_j that give the least residual sum of squares S_x (i.e., the least possible with a fixed value of x). From the first and third equations of the set, eq. (34), we get... [Pg.448]
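The source works this out from the normal equations (34), which are not reproduced here; as a rough sketch of the same idea (fix the abscissa x of the common intersection point, solve the remaining linear problem, and scan x for the smallest S_x), one might write:

```python
import numpy as np

def s_x(x0, series):
    """Residual sum of squares S_x for a family of straight lines constrained to
    pass through a common point with abscissa x0: y_ij = y0 + b_j * (x_ij - x0).
    With x0 fixed, y0 and the slopes b_j follow from ordinary (linear) least squares.
    series is a list of (x_array, y_array) pairs, one pair per line."""
    ys = np.concatenate([y for _, y in series])
    A = np.zeros((len(ys), 1 + len(series)))
    A[:, 0] = 1.0                                    # common ordinate y0
    row = 0
    for j, (xj, _) in enumerate(series):
        A[row:row + len(xj), 1 + j] = xj - x0        # slope b_j of series j
        row += len(xj)
    coef, *_ = np.linalg.lstsq(A, ys, rcond=None)
    return np.sum((ys - A @ coef) ** 2)

# Outer one-dimensional search for the smallest S_x, e.g.:
# scipy.optimize.minimize_scalar(lambda x0: s_x(x0, series), bounds=(x_lo, x_hi), method="bounded")
```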

The converged parameter values represent the Least Squares (LS), Weighted LS, or Generalized LS estimates depending on the choice of the weighting matrices Qᵢ. Furthermore, if certain assumptions regarding the statistical distribution of the residuals hold, these parameter values could also be the Maximum Likelihood (ML) estimates. [Pg.53]

The unweighted least squares analysis is based on the assumption that the best value of the rate constant k is the one that minimizes the sum of the squares of the residuals. In the general case one should regard the zero-time point as an adjustable constant in order to avoid undue weighting of the initial point. An analysis of this type gives the following expressions for first- and second-order rate constants... [Pg.55]
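For instance, a first-order fit with the zero-time value left adjustable could be set up as follows (the data below are invented purely for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k):
    # c(t) = c0 * exp(-k t); c0 (the zero-time value) is left adjustable
    # so that the initial point does not receive undue weight.
    return c0 * np.exp(-k * t)

t = np.array([0.0, 5.0, 10.0, 20.0, 40.0])   # illustrative data only
c = np.array([1.00, 0.79, 0.62, 0.38, 0.15])
(c0_hat, k_hat), cov = curve_fit(first_order, t, c, p0=(1.0, 0.05))
```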

With the method of least squares, we obtain three independent equations to be solved for the three constants of the quadratic equation. The procedure follows from the assumption that the best expression is the one for which the sum of the squares of the residuals is a minimum. If we define the residual for the general quadratic expression as... [Pg.532]
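A direct translation of this idea for the quadratic case, building and solving the three normal equations explicitly (a sketch; numpy.polyfit would give the same coefficients):

```python
import numpy as np

def quadratic_ls(x, y):
    """Solve the three normal equations for y ~ a + b*x + c*x**2, obtained by
    setting the derivatives of the sum of squared residuals with respect to
    a, b and c equal to zero."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    S = np.array([
        [len(x),        x.sum(),       (x**2).sum()],
        [x.sum(),       (x**2).sum(),  (x**3).sum()],
        [(x**2).sum(),  (x**3).sum(),  (x**4).sum()],
    ])
    t = np.array([y.sum(), (x * y).sum(), (x**2 * y).sum()])
    return np.linalg.solve(S, t)        # a, b, c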

We will use the constraint that the sum of squares of the residuals be minimal. The following is a brief development of the matrix approach to the least squares fitting of linear models to data. The approach is entirely general for all linear models. [Pg.77]
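A compact illustration of the matrix approach for a general linear model, using a hypothetical design matrix and data vector:

```python
import numpy as np

# Linear model y = X b + e: the least-squares estimate minimizes the residual
# sum of squares and satisfies the normal equations X^T X b = X^T y.
X = np.column_stack([np.ones(5), np.arange(5.0)])   # design matrix: intercept + slope
y = np.array([1.1, 2.9, 5.2, 7.1, 8.8])
b, *_ = np.linalg.lstsq(X, y, rcond=None)           # solved without forming (X^T X)^-1 explicitly
rss = np.sum((y - X @ b) ** 2)                      # minimized sum of squared residuals
```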

Least squares (LS) estimation minimizes the sum of squared deviations, comparing observed values to values predicted by a curve with particular parameter values. Weighted LS (WLS) can take into account differences in the variances of residuals; generalized LS (GLS) can take into account covariances of residuals as well as differences in weights. Cases of LS estimation include the following ... [Pg.35]

We want an estimate of the regression coefficients α and β. If we graph the data using the ordinate (y-axis) for the response variable and the abscissa (x-axis) for the explanatory variable, the data will appear as a scatter of points. What we seek are the values of α and β that will produce the best fit line through the data. The principle that is generally used is that of least squares. The idea is to look at the differences, or residuals, between the observed values of the response variable and the values predicted by the estimated regression line (Figure 21.6). [Pg.304]

Another approach is to prepare a stock solution of high concentration. Linearity is then demonstrated directly by dilution of the standard stock solution. This is the more popular and recommended approach. Linearity is best evaluated by visual inspection of a plot of the signals as a function of analyte concentration. Subsequently, the variable data are generally used to calculate a regression line by the least-squares method. At least five concentration levels should be used. Under normal circumstances, linearity is acceptable with a coefficient of determination (r²) of >0.997. The slope, residual sum of squares, and intercept should also be reported as required by ICH. [Pg.735]
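For example, the regression parameters to report can be obtained with scipy.stats.linregress; the concentration and signal values below are invented for illustration:

```python
import numpy as np
from scipy.stats import linregress

conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0])          # at least five concentration levels
signal = np.array([10.1, 20.3, 40.2, 60.4, 79.8])   # illustrative signals
fit = linregress(conc, signal)
r2 = fit.rvalue ** 2                                 # coefficient of determination
residual_ss = np.sum((signal - (fit.intercept + fit.slope * conc)) ** 2)
print(fit.slope, fit.intercept, r2, residual_ss)    # values to report
```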

As mentioned previously, the task of model-based data fitting for a given matrix Y is to determine the best rate constants defining the matrix C, as well as the best molar absorptivities collected in the matrix A. The quality of the fit is represented by the matrix of residuals, R = Y - C x A. Assuming white noise, i.e., normally distributed noise of constant standard deviation, the sum of the squares, ssq, of all elements is statistically the best measure to be minimized. This is generally called a least-squares fit. [Pg.222]
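A small sketch of the ssq computation; here the molar absorptivities A are eliminated as the explicit linear least-squares solution for a given C (a common device for this kind of bilinear fit, stated here as an assumption rather than as the book's exact formulation):

```python
import numpy as np

def ssq(Y, C):
    """For a trial C (defined by the rate constants), the best A in the
    least-squares sense is the linear solution A = pinv(C) @ Y; ssq is then
    the sum of squares of all elements of the residual matrix R = Y - C @ A."""
    A = np.linalg.pinv(C) @ Y
    R = Y - C @ A
    return np.sum(R ** 2), A
```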

The actual noise distribution in Y is often unknown, but generally a normal distribution is assumed. White noise signifies that the experimental standard deviations of all individual measurements, y, are the same and that the measurement errors are uncorrelated. The least-squares criterion applied to the residuals delivers the most likely parameters only under this condition of so-called white noise. However, even if this prerequisite is not fulfilled, it is usually still useful to perform the least-squares fit. This makes it the most commonly applied method for data fitting. [Pg.237]

