
Covariance error estimate

Accordingly, we have for the estimate of the variables, the measurement errors, and the error estimate covariance... [Pg.115]

Sg, covariance matrix of the error estimates; covariance matrix of variable estimates; λ, Lagrangian multipliers... [Pg.124]

V, covariance matrix of measurement error estimates; W, covariance matrix of d... [Pg.150]

This gives us the new error covariance, when the measurement at time t is processed, as a function of the previous one. Note again that this formula is equivalent to the expressions in Chapter 6 when considering different blocks of information. The process can be iterated at each step with the introduction or deletion of new observations. By induction, the covariance for the error estimate at the ith step can be written as... [Pg.183]
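The update formula itself is elided in the excerpt. As a hedged sketch (the function name, shapes, and numbers below are our assumptions, not the source's), the usual information-form recursion turns the previous error covariance P into (P⁻¹ + Hᵀ R⁻¹ H)⁻¹ when a measurement block with sensitivity H and error covariance R is processed:

```python
import numpy as np

def update_covariance(P, H, R):
    """Information-form update of the error-estimate covariance when one
    new measurement block (sensitivity H, error covariance R) is processed.
    A sketch of the recursion described in the text, not the book's code."""
    info = np.linalg.inv(P) + H.T @ np.linalg.inv(R) @ H
    return np.linalg.inv(info)

# Iterating over i measurement blocks yields the covariance at the ith step.
P = np.eye(2) * 10.0  # prior error covariance (assumed)
for H, R in [(np.array([[1.0, 0.5]]), np.array([[0.04]])),
             (np.array([[0.0, 1.0]]), np.array([[0.09]]))]:
    P = update_covariance(P, H, R)
print(P)
```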

Now, since the purpose of an experimental program is to gain information, in attempting to design the experiment we will try to plan the measurements in such a way that the final error estimate covariance is minimal. In our case, this can be achieved in a sequential manner by applying the matrix inversion lemma to Eq. (9.17), as we have shown in previous chapters. [Pg.183]
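As a sketch of how the matrix inversion lemma makes this sequential design cheap (the source's Eq. (9.17) is not reproduced; woodbury_update and pick_next_measurement are hypothetical names): the lemma rewrites the same covariance update without inverting P, so candidate measurements can be ranked, for example, by the trace of the updated covariance:

```python
import numpy as np

def woodbury_update(P, H, R):
    """Same covariance update via the matrix inversion lemma:
    P_new = P - P H^T (H P H^T + R)^{-1} H P. Avoids inverting P itself."""
    S = H @ P @ H.T + R
    return P - P @ H.T @ np.linalg.solve(S, H @ P)

def pick_next_measurement(P, candidates, R):
    """Sequential design sketch: choose the candidate measurement row h
    whose processing minimizes the trace of the updated covariance."""
    return min(candidates,
               key=lambda h: np.trace(woodbury_update(P, h.reshape(1, -1), R)))
```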

This iteration process can be repeated until a satisfactory solution is obtained. In general it is not easy to determine when a solution is satisfactory. The simplest method is to monitor the value of the covariance matrix of the error estimate and break off the iteration process if this covariance matrix falls below a given value or decreases by less than a given fraction from one step to the next. [Pg.166]
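A minimal sketch of such a stopping rule (using the trace as the size measure and threshold values that are our assumptions):

```python
import numpy as np

def converged(P_prev, P, abs_tol=1e-6, rel_tol=1e-3):
    """Break off the iteration when the error-estimate covariance is small
    enough, or no longer decreasing appreciably between steps."""
    size_prev, size = np.trace(P_prev), np.trace(P)
    return size < abs_tol or (size_prev - size) < rel_tol * size_prev
```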

Linear systems over a Gaussian random vector. If x is a Gaussian vector with mean value m and covariance C_x (the minimum mean square error estimate for x is x̂ = m), and x enters a formal linear system y = Hx + v completed with a zero-mean Gaussian vector v ~ N(0, R), then we have ... [Pg.180]

Consequently, the minimum mean square error estimate for y is ŷ = Hm, and the associated covariance is C_y = H C_x Hᵀ + R. [Pg.181]
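A numeric sketch of this result, with all matrices invented for illustration:

```python
import numpy as np

# For y = H x + v with x ~ N(m, Cx) and v ~ N(0, R), the MMSE estimate
# of y is H m, with covariance H Cx H^T + R.
m  = np.array([1.0, 2.0])     # assumed prior mean of x
Cx = np.diag([0.5, 0.2])      # assumed prior covariance of x
H  = np.array([[1.0, 1.0]])   # assumed observation matrix
R  = np.array([[0.1]])        # assumed noise covariance

y_hat = H @ m                 # MMSE estimate of y
Cy    = H @ Cx @ H.T + R      # its covariance
print(y_hat, Cy)
```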

The inverse matrix, B, is normalized by the reduced χ² [Equation (13)] to give the variance-covariance matrix. The square roots of the diagonal elements of this normalized matrix are the estimated errors in the values of the shifts and, thus, in the parameters themselves. These error estimates are based solely on the statistical errors in the original powder diffraction pattern intensities and cannot account for discrepancies arising from systematic flaws in the model. Consequently, the models used to describe the powder diffraction profile must correspond closely to... [Pg.269]
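A hedged sketch of that recipe (variable names are ours, and the source's Equation (13) is not reproduced): scale the inverse normal matrix by the reduced χ² and take the square roots of the diagonal.

```python
import numpy as np

def estimated_errors(B_inv, chi2_reduced):
    """Scale the inverse normal matrix by the reduced chi-square and take
    the square roots of the diagonal as the parameter error estimates."""
    vcov = chi2_reduced * B_inv      # variance-covariance matrix
    return np.sqrt(np.diag(vcov))    # estimated standard deviations
```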

More difficult is the estimation of errors for the nonlinear parameters, since no variance-covariance matrix exists. Frequently, the error estimates are restricted to a locally linear range; within this linearization range, the confidence bands for the parameters are then calculated as in the linear case (Eqs. (6.25)-(6.27)). An alternative is error estimation based on Monte Carlo simulations or bootstrapping methods (cf. Section 8.2). [Pg.262]
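A residual-bootstrap sketch of the second alternative (the model function, seed, and resample count are our assumptions; scipy's curve_fit is used as a generic nonlinear fitter):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(-b * x)    # assumed nonlinear model

def bootstrap_errors(x, y, p0, n_boot=500, rng=np.random.default_rng(0)):
    """Refit the model on residual-resampled data and report the spread
    of the refitted parameters as their error estimates."""
    popt, _ = curve_fit(model, x, y, p0=p0)
    resid = y - model(x, *popt)
    boots = []
    for _ in range(n_boot):
        y_star = model(x, *popt) + rng.choice(resid, size=resid.size, replace=True)
        p_star, _ = curve_fit(model, x, y_star, p0=popt)
        boots.append(p_star)
    return popt, np.std(boots, axis=0)   # estimates and bootstrap errors
```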

Extensions of Kalman filters and Luenberger observers [131]: solution polymerizations (conversion and molecular weight estimation) with and without on-line measurements of Mw [102, 113, 133, 134]; emulsion polymerization (monomer concentration in the particles, with or without estimation of the parameter n) [45, 139]; heat of reaction and heat transfer coefficient in polymerization reactors [135, 141, 142]. Advantages: computationally fast; reiterative and constrained algorithms are more robust; multi-rate operation (a mix of fast/frequent and slow measurements) can be handled. Drawbacks: trial and error is required for tuning the process- and observation-model covariance errors, and model linearization is required. Industrial applications remain scarce; a critical article by Wilson et al. [143] reviews the industrial implementation and reports their experiences at Ciba. Their main conclusion is that the superior performance of state estimation techniques over open-loop observers cannot be guaranteed. [Pg.335]

Li and Der Kiureghian (1993) introduced a spectral decomposition of the nodal covariance matrix. They showed that the maximum error of the KL expansion is not always smaller than the error of Kriging for a given number of retained terms. The point-wise variance error estimator of the KL expansion for a given order of truncation is smaller than the error of Kriging in the interior of the discretization domain but larger at the boundaries. Note however that the... [Pg.3473]
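For the discrete (nodal) case, the pointwise variance error of an order-M KL truncation can be sketched as follows (a generic illustration, not the authors' code; the error is the retained-variance shortfall at each node):

```python
import numpy as np

def kl_pointwise_variance_error(C, M):
    """Pointwise variance error of an order-M Karhunen-Loeve truncation of
    the nodal covariance matrix C. As the text notes, this error tends to
    be largest near the boundaries of the discretization domain."""
    w, V = np.linalg.eigh(C)          # eigen-decomposition, ascending order
    w, V = w[::-1], V[:, ::-1]        # largest eigenvalues first
    kept = (V[:, :M] ** 2) @ w[:M]    # variance retained at each node
    return np.diag(C) - kept          # shortfall = pointwise variance error
```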

If the measurement process satisfies certain properties, such as zero mean error, the Kalman Filter provides an optimal method for the fusion of data, especially when the method of least squares is used. The Kalman Filter, particularly in mobile robotics applications, is used to maintain a running estimate of the position and orientation of the vehicle, or of parameters that describe objects of interest in the environment, such as another mobile robot traveling in the same environment. The Kalman Filter allows an estimate of the current position of the robot, for example, to be combined with position information from one or more sensors. An attribute of the Kalman Filter is that it provides not only running estimates of a variety of parameters, but also the relative confidence in these estimates in the form of a covariance matrix. In certain circumstances, the Kalman Filter performs these updates in an optimal manner and effectively minimizes the expected estimation error. [Pg.214]
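A minimal one-dimensional sketch of this predict/update cycle (all noise values and sensor readings are invented for illustration):

```python
# Keep a running estimate of a robot's position and its variance, then
# fuse each sensor reading; the variance P is the scalar analogue of the
# covariance matrix the text describes.
x, P = 0.0, 1.0                          # position estimate and its variance
q, r = 0.01, 0.25                        # process and measurement noise (assumed)
for z in [0.9, 2.1, 2.9, 4.2]:           # simulated range-sensor readings
    x, P = x + 1.0, P + q                # predict: commanded 1.0 m motion
    K = P / (P + r)                      # Kalman gain
    x, P = x + K * (z - x), (1 - K) * P  # update: fuse the measurement
print(x, P)
```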

Estimation error covariance matrix; % Closed-loop estimator eigenvalues (MATLAB comment fragments) [Pg.411]

One way to proceed with this example is to estimate the error structure. Then using two different estimates, the sensitivity of the results to the assumed error structure can be examined. The first estimate of the covariance matrix used here is... [Pg.287]
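The excerpt's first covariance estimate is not reproduced; as a generic sketch of estimating an error structure (our assumption about the approach, not the source's equation), the sample covariance of replicate residuals is one common choice:

```python
import numpy as np

def sample_covariance(residuals):
    """Estimate the measurement-error covariance from replicate residuals
    (rows = repeated runs, columns = measured variables)."""
    E = residuals - residuals.mean(axis=0)
    return E.T @ E / (E.shape[0] - 1)
```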

By way of illustration, the regression parameters of a straight line with slope = 1 and intercept = 0 are recursively estimated. The results are presented in Table 41.1. For each step of the estimation cycle, we include the values of the innovation, the variance-covariance matrix, the gain vector and the estimated parameters. The variance of the experimental error of all observations y is 25·10⁻⁶ absorbance units squared, which corresponds to r = 25·10⁻⁶ au² for all j. The recursive estimation is started with a high value (a large power of 10) on the diagonal elements of P and a low value (1) on its off-diagonal elements. [Pg.580]

The sequence of the innovation, gain vector, variance-covariance matrix and estimated parameters of the calibration lines is shown in Figs. 41.1-41.4. We can clearly see that after four measurements the innovation stabilizes at the measurement error, which is 0.005 absorbance units. The gain vector decreases monotonically, and the estimates of the two parameters stabilize after four measurements. It should be remarked that the design of the measurements fully defines the variance-covariance matrix and the gain vector in eqs. (41.3) and (41.4), as is the case in ordinary regression. Thus, once the design of the experiments is chosen... [Pg.580]
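A hedged simulation of this example (the start-up covariance, design points, and random seed are our assumptions standing in for the values behind Table 41.1):

```python
import numpy as np

# Recursive least squares for the calibration line y = b0 + b1*x with
# true slope 1, intercept 0, and measurement variance r = 25e-6 au^2.
rng = np.random.default_rng(1)
theta = np.zeros(2)                      # [intercept, slope] estimates
P = np.diag([1e6, 1e6])                  # large initial covariance (assumed)
r = 25e-6                                # measurement-error variance
for xj in np.arange(0.0, 1.01, 0.1):
    h = np.array([1.0, xj])              # design row for this measurement
    y = xj + rng.normal(0.0, 0.005)      # simulated absorbance reading
    innovation = y - h @ theta           # stabilizes at ~0.005 au
    k = P @ h / (h @ P @ h + r)          # gain vector
    theta = theta + k * innovation       # parameter update
    P = P - np.outer(k, h @ P)           # variance-covariance update
print(theta)
```

Note that, as the excerpt says, h and r alone determine the sequence of P and k; the simulated readings only enter through the innovation.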

At this point let us assume that the covariance matrices (Σᵢ) of the measured responses (and hence of the error terms) during each experiment are known precisely. Obviously, in such a case the ML parameter estimates are obtained by minimizing the following objective function... [Pg.16]

Having an estimate (through the error propagation law) of the covariance matrix Σ, we can obtain the ML parameter estimates by minimizing the objective function,... [Pg.21]
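The objective function is elided in both excerpts above; a sketch of the standard weighted-residual ML form they describe (function and argument names are ours):

```python
import numpy as np

def ml_objective(params, experiments, model, Sigma_inv):
    """ML objective when the response covariances are known:
    S(p) = sum_i [y_i - f(x_i, p)]^T Sigma_i^{-1} [y_i - f(x_i, p)]."""
    S = 0.0
    for (x_i, y_i), Si_inv in zip(experiments, Sigma_inv):
        e = y_i - model(x_i, params)     # residual for experiment i
        S += e @ Si_inv @ e              # covariance-weighted contribution
    return S
```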

We also use a linearized covariance analysis [34, 36] to evaluate the accuracy of the estimates, and take the measurement errors to be normally distributed with zero mean and a known covariance matrix. Assuming that the mathematical model is correct and that our selected partitions can represent the true multiphase flow functions, the mean of the error in the estimates is zero, and the covariance matrix of the errors in the parameter estimates is ... [Pg.378]
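A sketch of the linearized covariance analysis described (names are ours; J is the Jacobian of the model responses with respect to the parameters at the estimate, C_d the data covariance):

```python
import numpy as np

def linearized_parameter_covariance(J, C_d):
    """Linearized covariance of the errors in the parameter estimates:
    (J^T C_d^{-1} J)^{-1}, valid under the text's assumptions of a correct
    model and zero-mean Gaussian measurement errors."""
    W = np.linalg.inv(C_d)
    return np.linalg.inv(J.T @ W @ J)
```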

The adjustment of measurements to compensate for random errors involves the resolution of a constrained minimization problem, usually one of constrained least squares. Balance equations are included in the constraints; these may be linear but are generally nonlinear. The objective function is usually quadratic with respect to the adjustment of the measurements, with the covariance matrix of the measurement errors as weights. Thus, this matrix is essential for obtaining reliable process knowledge. Some efforts have been made to estimate it from measurements (Almasy and Mah, 1984; Darouach et al., 1989; Keller et al., 1992; Chen et al., 1997). The difficulty in the estimation of this matrix is associated with the analysis of the serial and cross correlation of the data. [Pg.25]
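For the linear case, this constrained least-squares adjustment has a closed form; a sketch (the flowsheet and all numbers are invented):

```python
import numpy as np

def reconcile(y, A, Sigma):
    """Linear data reconciliation: adjust measurements y to satisfy the
    balance constraints A x = 0 while minimizing the covariance-weighted
    squared adjustment (y - x)^T Sigma^{-1} (y - x). Closed form:
    x = y - Sigma A^T (A Sigma A^T)^{-1} A y."""
    S = A @ Sigma @ A.T
    return y - Sigma @ A.T @ np.linalg.solve(S, A @ y)

# Example: one node, flow in (f1) must equal the flows out (f2 + f3).
A = np.array([[1.0, -1.0, -1.0]])
Sigma = np.diag([0.04, 0.02, 0.02])      # measurement-error covariance
print(reconcile(np.array([10.3, 6.1, 4.5]), A, Sigma))
```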


See other pages where Covariance error estimate is mentioned: [Pg.55]    [Pg.89]    [Pg.89]    [Pg.426]    [Pg.1139]    [Pg.16]    [Pg.493]    [Pg.256]    [Pg.50]    [Pg.98]    [Pg.2569]    [Pg.415]    [Pg.378]    [Pg.110]    [Pg.549]    [Pg.356]    [Pg.367]    [Pg.579]    [Pg.579]    [Pg.580]    [Pg.581]    [Pg.582]    [Pg.635]   
See also in source #XX -- [Pg.94, Pg.106, Pg.113, Pg.164]







Covariance

Covariance estimated

Covariant

Covariates

Covariation

Error estimate

Error estimating

Error estimation

Estimate covariance

Estimated error
