
Weighted least-squares estimator

Supposing D = Cov(e) to be known, we could improve our estimation procedure by giving more weight to those points of which we are more certain, that is, those whose associated errors have the smallest variance, while also taking into account the correlations among the errors. We may then denote by θ̂ the weighted least-squares estimator (WLSE), which is the value of θ minimizing... [Pg.79]
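
For a linear model y = Xθ + e with D = Cov(e), the criterion referred to above and its closed-form solution can be written (in our notation, which is not necessarily that of the excerpt) as

    \hat{\theta}_{\mathrm{WLS}} = \arg\min_{\theta}\,(y - X\theta)^{\mathsf T} D^{-1} (y - X\theta)
                                = (X^{\mathsf T} D^{-1} X)^{-1} X^{\mathsf T} D^{-1} y .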

The error-in-variables method can be simplified to weighted least squares estimation if the independent variables are assumed to be known precisely or if their error variances are negligible compared with those of the dependent variables. In practice, however, the VLE behavior of the binary system dictates the choice of the pairs (T,x) or (T,P) as independent variables. In systems with a... [Pg.233]

The data reconciliation problem can be generally stated as the following constrained weighted least-squares estimation problem ... [Pg.95]

Using the Q-R orthogonal factorization method described in Chapter 4, the constrained weighted least-squares estimation problem (5.4) is transformed into an unconstrained one. The following steps are required ... [Pg.98]
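
The steps themselves are not reproduced in the excerpt. As a hedged illustration of the general idea — eliminating the linear constraints through an orthogonal factorization so that only an unconstrained weighted least-squares problem remains — the following sketch reconciles a toy flow balance. The constraint matrix A, covariance Sigma, and measurements y are invented, and the null-space route shown here is one standard way of doing it, not necessarily the exact sequence of steps in the book.

    import numpy as np

    def reconcile(y, Sigma, A):
        """Constrained WLS: minimize (y - x)' Sigma^-1 (y - x) subject to A x = 0."""
        # Orthogonal factorization of A^T: the trailing columns of Q span null(A),
        # so every feasible x can be written as x = Z t (an unconstrained problem in t).
        m, n = A.shape
        Q, _ = np.linalg.qr(A.T, mode="complete")
        Z = Q[:, m:]                          # n x (n - m) basis of the null space, A @ Z = 0
        W = np.linalg.inv(Sigma)              # weights = inverse of the error covariance
        t = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ y)   # unconstrained WLS in t
        return Z @ t                          # reconciled estimates x_hat

    # Toy example: three measured flows obeying the balance x1 - x2 - x3 = 0.
    y = np.array([10.1, 6.2, 3.4])            # raw measurements (invented)
    Sigma = np.diag([0.04, 0.02, 0.02])       # measurement error covariance (invented)
    A = np.array([[1.0, -1.0, -1.0]])         # linear balance constraint
    print(reconcile(y, Sigma, A))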

Another approach for the determination of the kinetic parameters is to use the SAS NLIN (NonLINear regression) procedure (SAS, 1985) which produces weighted least-squares estimates of the parameters of nonlinear models. The advantages of this technique are that (1) it does not require linearization of the Michaelis-Menten equation, (2) it can be used for complicated multiparameter models, and (3) the estimated parameter values are reliable because it produces weighted least-squares estimates. [Pg.24]
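
For readers without SAS, the same kind of weighted nonlinear fit of the Michaelis-Menten equation can be sketched in Python; this illustrates the idea, not the NLIN procedure itself, and the rate data and error model below are invented.

    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(S, Vmax, Km):
        return Vmax * S / (Km + S)

    S = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])    # substrate concentrations (invented)
    v = np.array([0.9, 1.5, 2.2, 3.1, 3.6, 3.9])      # measured rates (invented)
    sigma = 0.05 * v                                  # assumed std. dev. of each rate

    popt, pcov = curve_fit(michaelis_menten, S, v, p0=[4.0, 2.0],
                           sigma=sigma, absolute_sigma=True)
    Vmax, Km = popt
    print(Vmax, Km, np.sqrt(np.diag(pcov)))           # estimates and their standard errors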

The former summary of the calculations is preferred since we may wish to use the estimates s(H) in weighting the rate coefficients in a subsequent calculation (e.g. in evaluating the weighted least squares estimate of the activation energy). [Pg.413]
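
A hedged sketch of the subsequent calculation mentioned here — weighted estimation of the activation energy from rate coefficients and their standard deviations. All numbers are invented, and the error propagation sigma(ln k) ≈ s(k)/k is our assumption.

    import numpy as np

    T = np.array([300.0, 320.0, 340.0, 360.0])      # temperature, K (invented)
    k = np.array([1.2e-3, 4.8e-3, 1.6e-2, 4.6e-2])  # rate coefficients (invented)
    s_k = np.array([1e-4, 3e-4, 1e-3, 4e-3])        # std. dev. of each k (invented)

    x = 1.0 / T
    y = np.log(k)
    s_y = s_k / k                     # propagated uncertainty of ln k
    w = 1.0 / s_y                     # np.polyfit expects weights ~ 1/sigma

    slope, intercept = np.polyfit(x, y, 1, w=w)
    Ea = -slope * 8.314               # J/mol, from slope = -Ea/R
    print(Ea, np.exp(intercept))      # activation energy and pre-exponential factor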

Now we can combine several measurements, each with its associated error variance, to form the variance-weighted least-squares estimate... [Pg.378]
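
Written out in generic notation (ours, not the excerpt's), the variance-weighted combination of measurements θ_j with variances σ_j² is

    \hat{\theta} = \frac{\sum_j \theta_j/\sigma_j^2}{\sum_j 1/\sigma_j^2},
    \qquad
    \operatorname{Var}(\hat{\theta}) = \left(\sum_j 1/\sigma_j^2\right)^{-1}.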

Fortunately, the weighted least squares estimators can easily be computed from standard software programs, where W is an n × n weight matrix [Pg.301]
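
As an example of such standard software (here Python's statsmodels, chosen for illustration; the excerpt does not name a package), a diagonal weight matrix is supplied as per-observation weights proportional to 1/variance. The data below are simulated.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 30)
    sigma = 0.1 + 0.05 * x                      # heteroscedastic noise (assumed model)
    y = 2.0 + 0.7 * x + rng.normal(0, sigma)

    X = sm.add_constant(x)                      # intercept + slope
    fit = sm.WLS(y, X, weights=1.0 / sigma**2).fit()
    print(fit.params, fit.bse)                  # estimates and standard errors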

The steady-state data were also used to estimate the best values of the two surface rate constants. This was achieved by a non-linear weighted least squares estimation technique, given the experimentally obtained values of the adsorption/desorption rate constants. These constants were subsequently used in the simulation of the cyclic runs. [Pg.517]

We recognize the equation in step 2 as the weighted least squares estimate on the adjusted observations. Iterating through these two steps until convergence finds the iteratively reweighted least squares estimates. [Pg.207]
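
A self-contained sketch of that two-step iteration for the logistic-regression case, one common setting for iteratively reweighted least squares; the simulated data and the usual GLM notation (working response z, weights w) are ours, not the excerpt's.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    beta_true = np.array([-0.5, 1.5])
    p_true = 1.0 / (1.0 + np.exp(-X @ beta_true))
    y = rng.binomial(1, p_true)

    beta = np.zeros(2)
    for _ in range(25):
        eta = X @ beta
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = mu * (1.0 - mu)                              # IRLS weights
        z = eta + (y - mu) / w                           # step 1: adjusted (working) observations
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ z)   # step 2: weighted LS on z
        if np.max(np.abs(beta_new - beta)) < 1e-8:
            beta = beta_new
            break
        beta = beta_new

    print(beta)        # close to beta_true for the simulated data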

A weighted least-squares analysis is used to obtain better estimates of the rate law parameters when the variance is not constant throughout the range of measured variables. If the error in measurement is constant, the relative error in the dependent variable will increase as the independent variable increases or decreases. [Pg.173]

The weighted least-squares analysis is important for estimating parameters involving exponents. Examples are the concentration-time data
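
As a hedged illustration of such a weighting for concentration-time data (invented numbers; the assumption of a constant 5 % relative error is ours), each point can be weighted through a sigma proportional to its concentration so that the small late-time values are not swamped by the large early-time ones.

    import numpy as np
    from scipy.optimize import curve_fit

    def first_order(t, C0, k):
        return C0 * np.exp(-k * t)

    t = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 32.0])    # time (invented)
    C = np.array([10.2, 8.1, 6.6, 4.3, 1.9, 0.35])    # measured concentration (invented)
    sigma = 0.05 * C                                  # assumed 5 % relative error

    (C0, k), cov = curve_fit(first_order, t, C, p0=[10.0, 0.1], sigma=sigma)
    print(C0, k)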

If we decide to estimate only a finite number of basis modes, we implicitly assume that the coefficients of all the other modes are zero and that the (a priori) covariance of the modes we do estimate is very large. Thus QN Q becomes large relative to C, and in this case Eq. 16 simplifies to a weighted least squares formula [Pg.381]
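
The limiting argument can be stated in generic notation (ours, not that of Eq. 16): for a linear estimate of coefficients a from data y = Ha + e with error covariance C and prior coefficient covariance P,

    \hat{a} = (H^{\mathsf T} C^{-1} H + P^{-1})^{-1} H^{\mathsf T} C^{-1} y
    \;\longrightarrow\;
    (H^{\mathsf T} C^{-1} H)^{-1} H^{\mathsf T} C^{-1} y
    \quad \text{as } P^{-1} \to 0,

which is the weighted least-squares formula.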

Principal covariates regression (PCovR) is a technique that has recently been put forward as a more flexible alternative to PLS regression [17]. Like CCA, RRR, PCR and PLS, it extracts factors t from X that are used to estimate Y. These factors are chosen by a weighted least-squares criterion, viz. to fit both Y and X. By requiring the factors not only to be predictive for Y but also to represent X adequately, one introduces a preference towards the directions of the stable principal components of X. [Pg.342]
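
One common way of writing the PCovR criterion (stated here in our own notation as a gloss, not a quotation of [17]; α ∈ [0, 1] is a user-chosen weight balancing the fit to X against the fit to Y, and T collects the factor scores t):

    \min_{T,\,P_X,\,P_Y}\;
    \alpha\,\frac{\lVert X - T P_X^{\mathsf T}\rVert^{2}}{\lVert X\rVert^{2}}
    \;+\;(1-\alpha)\,\frac{\lVert Y - T P_Y^{\mathsf T}\rVert^{2}}{\lVert Y\rVert^{2}}.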

The CLS method hinges on accurately modelling the calibration spectra as a weighted sum of the spectral contributions of the individual analytes. For this to work the concentrations of all the constituents in the calibration set have to be known. The implication is that constituents not of direct interest should be modelled as well and their concentrations should be under control in the calibration experiment. Unexpected constituents, physical interferents, non-linearities of the spectral responses or interaction between the various components all invalidate the simple additive, linear model underlying controlled calibration and classical least squares estimation. [Pg.356]
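
A compact sketch of this classical least-squares calibration with synthetic spectra; the variable names (A for the absorbance matrix, C for concentrations, K for pure-component spectra) are generic rather than the book's, and the noise levels are invented.

    import numpy as np

    rng = np.random.default_rng(2)
    n_wl, n_comp, n_cal = 50, 2, 12
    K_true = np.abs(rng.normal(size=(n_comp, n_wl)))      # pure-component spectra (synthetic)
    C_cal = rng.uniform(0.1, 1.0, size=(n_cal, n_comp))   # known calibration concentrations
    A_cal = C_cal @ K_true + 0.01 * rng.normal(size=(n_cal, n_wl))

    # Calibration step: least-squares estimate of the pure spectra from A ~ C K
    K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)

    # Prediction step: concentrations of an unknown mixture from its spectrum
    c_unknown = np.array([0.3, 0.7])
    a_unknown = c_unknown @ K_true + 0.01 * rng.normal(size=n_wl)
    c_hat, *_ = np.linalg.lstsq(K_hat.T, a_unknown, rcond=None)
    print(c_hat)          # should be close to [0.3, 0.7]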

The amount of NIPA is determined based upon external standard calibration. A non-weighted linear least-squares estimate of the calibration curve is used to calculate the amount of NIPA in the unknowns. The response of any given sample must not exceed the response of the most concentrated standard. If this occurs, dilution of the sample will be necessary. [Pg.367]
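
A minimal sketch of this external-standard procedure with an unweighted linear fit; the standard concentrations and responses below are hypothetical, not NIPA data.

    import numpy as np

    conc_std = np.array([0.5, 1.0, 2.0, 5.0, 10.0])         # standard concentrations (invented)
    resp_std = np.array([120., 250., 480., 1210., 2450.])   # instrument responses (invented)

    slope, intercept = np.polyfit(conc_std, resp_std, 1)    # unweighted linear least squares

    resp_unknown = 900.0
    if resp_unknown > resp_std.max():
        raise ValueError("response exceeds highest standard; dilute the sample")
    amount = (resp_unknown - intercept) / slope
    print(amount)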

However, the requirement of exact knowledge of all covariance matrices (Σi, i = 1, 2, ..., N) is rather unrealistic. Fortunately, in many situations of practical importance, we can make certain quite reasonable assumptions about the structure of Σi that allow us to obtain the ML estimates using Equation 2.21. This approach can actually aid us in establishing guidelines for the selection of the weighting matrices Qi in least squares estimation. [Pg.17]
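
The guideline that typically results can be stated in our notation (as a gloss, not a quotation of Equation 2.21): for normally distributed errors with covariance Σi, maximizing the likelihood is equivalent to minimizing a weighted least-squares objective with Qi = Σi^{-1},

    S(\theta) = \sum_{i=1}^{N}
    \left[y_i - f(x_i,\theta)\right]^{\mathsf T} \Sigma_i^{-1} \left[y_i - f(x_i,\theta)\right].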

SELECTION OF WEIGHTING MATRIX Q IN LEAST SQUARES ESTIMATION... [Pg.147]

To construct the Hill plot (Figure 5.10E), it was assumed that Bmax was 0.654 fmol/mg dry wt., the Scatchard value. The slope of the plot is 1.138 with a standard deviation of 0.12, so it would not be unreasonable to suppose nH was indeed 1 and so consistent with a simple bimolecular interaction. Figure 5.10B shows a nonlinear least-squares fit of Eq. (5.3) to the specific binding data (giving all points equal weight). The least-squares estimates are 0.676 fmol/mg dry wt. for Bmax and... [Pg.178]
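
A sketch of the Hill-plot construction referred to above, using invented binding data; Eq. (5.3) itself is not reproduced here, and the usual hyperbolic form B = Bmax·L/(Kd + L) is only assumed for the simulation.

    import numpy as np

    rng = np.random.default_rng(3)
    Bmax_assumed = 0.654                              # fmol/mg dry wt. (value quoted above)
    L = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])    # free ligand concentrations (invented)
    B = Bmax_assumed * L / (2.0 + L) * (1 + 0.03 * rng.normal(size=L.size))

    # Hill plot: log(B/(Bmax - B)) versus log L; the slope estimates nH
    yH = np.log10(B / (Bmax_assumed - B))
    xH = np.log10(L)
    nH, intercept = np.polyfit(xH, yH, 1)
    print(nH)                                         # ~1 for a simple bimolecular interaction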

When more than two data points are available, the graphical method is much better to use than common averaging techniques, since it gives one a visual picture of how well the data fit Eq. 3.3.55. If one has several data points and estimates of the uncertainty in each point, a weighted least squares fit of the data would be appropriate. [Pg.63]

Historically, treatment of measurement noise has been addressed through two distinct avenues. For steady-state data and processes, Kuehn and Davidson (1961) presented the seminal paper describing the data reconciliation problem based on least squares optimization. For dynamic data and processes, Kalman filtering (Gelb, 1974) has been successfully used to recursively smooth measurement data and estimate parameters. Both techniques were developed for linear systems and weighted least squares objective functions. [Pg.577]

The most straightforward approach for solving nonlinear EVM problems is to use nonlinear programming to estimate z and θ simultaneously. In the traditional weighted least squares parameter estimation formulation there are only n optimization variables, corresponding to the number of unknown parameters. In contrast, the simultaneous parameter estimation and data reconciliation formulation has (pM + n)... [Pg.186]
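
In generic notation (ours), the simultaneous formulation referred to estimates the reconciled values z_i together with the parameters θ:

    \min_{z_1,\dots,z_M,\;\theta}\;
    \sum_{i=1}^{M} (y_i - z_i)^{\mathsf T} \Sigma_i^{-1} (y_i - z_i)
    \quad \text{subject to} \quad f(z_i,\theta) = 0,\; i = 1,\dots,M,

which is consistent with the (pM + n) count mentioned above if each of the M data sets contains p measured variables and θ has n elements.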

As was shown, the conventional method for data reconciliation is that of weighted least squares, in which the adjustments to the data are weighted by the inverse of the measurement noise covariance matrix so that the model constraints are satisfied. The main assumption of the conventional approach is that the errors follow a normal Gaussian distribution. When this assumption is satisfied, conventional approaches provide unbiased estimates of the plant states. The presence of gross errors violates the assumptions in the conventional approach and makes the results invalid. [Pg.218]

For least squares estimation, ρ is the squared-error function, and the influence function ψ(u) = u. For a very large residual, u → ∞, and ψ also grows to infinity; this means that a single outlier has a large influence on the estimation. That is, for least squares estimation, every observation is treated equally and has the same weight. [Pg.226]
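
To make the contrast concrete, here is a hedged sketch comparing the unbounded least-squares influence ψ(u) = u with a bounded (Huber) alternative fitted by iterative reweighting; the data, the single gross outlier, and the tuning constant c = 1.345 are all chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(4)
    x = np.linspace(0, 10, 25)
    y = 1.0 + 0.5 * x + 0.2 * rng.normal(size=x.size)
    y[20] += 8.0                               # one gross outlier
    X = np.column_stack([np.ones_like(x), x])

    def huber_weights(u, c=1.345):
        """w(u) = psi(u)/u: 1 inside [-c, c], c/|u| outside (bounded influence)."""
        a = np.abs(u)
        return np.where(a <= c, 1.0, c / a)

    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # ordinary least-squares start
    for _ in range(50):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12      # robust scale estimate
        w = huber_weights(r / s)
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)  # weighted least-squares step
        if np.max(np.abs(beta_new - beta)) < 1e-10:
            beta = beta_new
            break
        beta = beta_new

    print(beta)     # much closer to (1.0, 0.5) than the plain least-squares fit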

In order to calculate Vx, and therefore the detection limit, it is necessary first to estimate Vy as a function of concentration and then to use this information to estimate the parameters of the calibration curve using weighted least squares (WLS) fitting. Rigorous application of WLS requires knowledge of the relative weights, but the technique is already considered adequate when n ≥ 5 (18). [Pg.62]
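
A hedged sketch of that two-stage procedure: estimate V(y) from replicate standards, model it as a simple function of concentration (here a standard deviation proportional to concentration, which is our assumption), and then fit the calibration line by WLS. All numbers are invented.

    import numpy as np

    conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
    # replicate responses at each standard level (rows = levels, columns = replicates), invented
    resp = np.array([[12.1, 11.8, 12.4],
                     [24.6, 23.9, 25.1],
                     [49.8, 48.2, 51.0],
                     [122., 118., 127.],
                     [245., 236., 256.]])

    sd_y = resp.std(axis=1, ddof=1)                   # estimated spread of y at each level
    # simple variance model: standard deviation proportional to concentration
    k = np.sum(sd_y * conc) / np.sum(conc**2)         # least-squares fit of sd ~ k*conc through the origin
    weights = 1.0 / (k * conc)                        # np.polyfit expects weights ~ 1/sigma

    y_mean = resp.mean(axis=1)
    slope, intercept = np.polyfit(conc, y_mean, 1, w=weights)
    print(slope, intercept)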

