Covariance matrix general

Equations (41.15) and (41.19) for the extrapolation and update of system states form the so-called state-space model. The solution of the state-space model has been derived by Kalman and is known as the Kalman filter. Assumptions are that the measurement noise v(j) and the system noise w(j) are random and independent, normally distributed, white and uncorrelated. This leads to the general formulation of a Kalman filter given in Table 41.10. Equations (41.15) and (41.19) account for the time dependence of the system. Eq. (41.15) is the system equation, which tells us how the system behaves in time (here in j units). Equation (41.16) expresses how the uncertainty in the system state grows as a function of time (here in j units) if no observations were made. Q(j - 1) is the variance-covariance matrix of the system noise, which contains the variance of w. [Pg.595]
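As a concrete illustration, here is a minimal predict/update sketch in standard Kalman notation; the matrices F, H, Q, R and the gain computation are generic stand-ins for the quantities in Eqs. (41.15)-(41.19) and Table 41.10, not the source's exact formulation:

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One Kalman filter cycle: extrapolate the state and its covariance,
    then correct both with the new measurement z."""
    # Predict: the state covariance grows by the system-noise covariance Q
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weight the innovation by the Kalman gain K
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```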

All of the previous ideas are developed further in Chapter 8, where the analysis of dynamic and quasi-steady-state processes is considered. Chapter 9 is devoted to the general problem of joint parameter estimation-data reconciliation, an important issue in assessing plant performance. In addition, some techniques for estimating the covariance matrix from the measurements are discussed in Chapter 10. New trends in this field are summarized in Chapter 11, and the last chapter is devoted to illustrations of the application of the previously presented techniques to various practical cases. [Pg.17]

The adjustment of measurements to compensate for random errors involves the resolution of a constrained minimization problem, usually one of constrained least squares. Balance equations are included in the constraints; these may be linear but are generally nonlinear. The objective function is usually quadratic with respect to the adjustment of measurements, and it has the covariance matrix of measurement errors as weights. Thus, this matrix is essential for obtaining reliable process knowledge. Some efforts have been made to estimate it from measurements (Almasy and Mah, 1984; Darouach et al., 1989; Keller et al., 1992; Chen et al., 1997). The difficulty in the estimation of this matrix is associated with the analysis of the serial and cross correlation of the data. [Pg.25]
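For the linear case the reconciliation has a closed form. The sketch below, under the assumption of linear balance constraints A x = 0 and a known error covariance Sigma (the stream data are invented for illustration), adjusts the measurements by minimizing the covariance-weighted sum of squared adjustments:

```python
import numpy as np

def reconcile(y, Sigma, A):
    """Adjust measurements y so the balances A @ x = 0 hold, minimizing
    (x - y)^T Sigma^{-1} (x - y); Sigma weights the adjustments."""
    r = A @ y                            # balance residuals of the raw data
    W = A @ Sigma @ A.T                  # covariance of those residuals
    return y - Sigma @ A.T @ np.linalg.solve(W, r)

# Toy flowsheet: one stream splits into two, so x1 - x2 - x3 = 0.
y = np.array([10.3, 6.1, 4.2])           # measured flows
Sigma = np.diag([0.2, 0.1, 0.1])         # measurement-error covariance
A = np.array([[1.0, -1.0, -1.0]])
x_hat = reconcile(y, Sigma, A)
print(x_hat, A @ x_hat)                   # adjusted flows close the balance
```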

General case. If some measurement errors are correlated, the covariance matrix is not diagonal. It is assumed that we know which sensors are subject to correlated measurement errors, for example because they share some common elements (e.g., power supplies). There are then s off-diagonal elements of the covariance matrix ... [Pg.207]

However, care must be taken to avoid the singularity that occurs when C is not full rank. In general, the rank of C will be equal to the number of random variables needed to define the joint PDF. Likewise, its rank deficiency will be equal to the number of random variables that can be expressed as linear functions of other random variables. Thus, the covariance matrix can be used to decompose the composition vector into its linearly independent and linearly dependent components. The joint PDF of the linearly independent components can then be approximated by (5.332). [Pg.239]
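The decomposition can be sketched directly with an eigendecomposition; in this illustration (data and tolerance invented), the third variable is an exact linear function of the first two, so C is rank-deficient:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
X = np.column_stack([X, X[:, 0] + 2 * X[:, 1]])  # third column is dependent
C = np.cov(X, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(C)
keep = eigvals > 1e-10 * eigvals.max()           # tolerance for 'zero'
print("rank of C:", keep.sum())                  # 2, not 3
Z = X @ eigvecs[:, keep]                         # linearly independent part
```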

In the more general case, the vector Xn with mean μ and n×n covariance matrix... [Pg.205]

Figure 6.15 visualizes the different cluster models mentioned above. The left picture is the result of using a model with the same form σ²I for all clusters. The middle picture changes the cluster size with σj²I. The right picture shows the most general cluster model, each cluster having a different covariance matrix Σj. Clearly, several more model classes are possible. [Pg.282]
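These model classes correspond loosely to the covariance_type options of scikit-learn's GaussianMixture; the mapping and data below are illustrative assumptions, not taken from the cited figure ('spherical' gives a per-cluster σj²I like the middle picture, 'full' a per-cluster Σj like the right one, while 'tied' shares one full covariance across clusters and the left model, a common σ²I, has no direct switch):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)),     # tight cluster
               rng.normal(5, 2, (100, 2))])    # wider cluster

for ctype in ("spherical", "tied", "full"):
    gm = GaussianMixture(n_components=2, covariance_type=ctype).fit(X)
    print(ctype, round(gm.bic(X), 1))   # BIC trades fit vs. parameter count
```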

The generalized Fisher theorems derived in this section are statements about the space variation of the vectors of the relative and absolute space-specific rates of growth. These vectors have a simple natural (biological, chemical, physical) interpretation: they express the capacity of a species of type u to fill out space; in genetic language, they are space-specific fitness functions. In addition, the covariance matrix of the vector of the relative space-specific rates of growth, gαβ [Eq. (25)], is a Riemannian metric tensor that enters the expression of a Fisher information metric [Eqs. (24) and (26)]. These results may serve as a basis for solving inverse problems for reaction-transport systems. [Pg.180]

In this chapter, we will examine the variance-covariance matrix to see how the location of experiments in factor space (i.e., the experimental design) affects the individual variances and covariances of the parameter estimates. Throughout this section we will be dealing with the specific two-parameter first-order model yᵢ = β₀ + β₁xᵢ + rᵢ only; the resulting principles are entirely general, however, and can be... [Pg.119]
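To make the dependence on design concrete, here is a small sketch (designs invented for illustration) computing (XᵀX)⁻¹, whose elements scale the variances and covariances of b0 and b1, for two placements of four experiments:

```python
import numpy as np

def xtx_inv(xs):
    """(X^T X)^{-1} for the straight-line model y = b0 + b1*x."""
    X = np.column_stack([np.ones(len(xs)), xs])   # columns for b0 and b1
    return np.linalg.inv(X.T @ X)

print(xtx_inv([1.0, 2.0, 3.0, 4.0]))   # clustered levels: var(b1) factor 0.20
print(xtx_inv([0.0, 0.0, 5.0, 5.0]))   # spread levels:    var(b1) factor 0.04
```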

The value of Eq. (4.32) is that it shows exactly which features of the structure are well determined and which are poorly determined in the fitting procedure. For the two-dimensional example of Fig. 4.1, the eigenparameters correspond to the principal axes of the variance-covariance ellipsoid in the figure. In general, they define the principal axes of the hyper-ellipsoid in n-dimensional parameter space which represents the variance-covariance matrix. [Pg.79]

Instead of the univariate Fisher ratio, SLDA considers the ratio between the generalized within-category dispersion (the determinant of the pooled within-category covariance matrix) and the total dispersion (the determinant of the generalized covariance matrix). This ratio is called Wilks' lambda, and the smaller it is, the better the separation between categories. The selected variable is the one that produces the maximum decrease of Wilks' lambda, tested by a suitable F statistic for the input of a new variable or for the deletion of a previously selected one. [Pg.134]
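A minimal sketch of the statistic itself, using the standard scatter-matrix (SSCP) form Λ = det(W)/det(T) on synthetic two-category data:

```python
import numpy as np

def wilks_lambda(X, labels):
    """det(pooled within-category scatter) / det(total scatter);
    smaller values mean better-separated categories."""
    T = (X - X.mean(axis=0)).T @ (X - X.mean(axis=0))   # total scatter
    W = np.zeros_like(T)
    for g in np.unique(labels):
        Xg = X[labels == g]
        W += (Xg - Xg.mean(axis=0)).T @ (Xg - Xg.mean(axis=0))
    return np.linalg.det(W) / np.linalg.det(T)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
labels = np.repeat([0, 1], 50)
print(wilks_lambda(X, labels))        # well below 1 for separated groups
```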

Use the delta method to obtain the asymptotic variances and covariance of these two functions, assuming the data are drawn from a normal distribution with mean μ and variance σ². (Hint: Under the assumptions, the sample mean is a consistent estimator of μ, so for purposes of deriving asymptotic results the difference between X̄ and μ may be ignored. As such, no generality is lost by assuming the mean is zero, and proceeding from there. Obtain V, the 3×3 covariance matrix for the three moments, then use the delta method to show that the covariance matrix for the two estimators is... [Pg.93]
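As a reminder of the mechanics, here is a generic delta-method sketch: if θ̂ has asymptotic covariance V, then g(θ̂) has covariance J V Jᵀ, with J the Jacobian of g. The functions and the matrix V below are illustrative placeholders, not the ones the exercise asks for:

```python
import numpy as np

def delta_method_cov(g, theta, V, eps=1e-6):
    """Asymptotic covariance of g(theta_hat): J V J^T, with the Jacobian
    J of g approximated here by forward finite differences."""
    theta = np.asarray(theta, dtype=float)
    g0 = np.atleast_1d(g(theta))
    J = np.zeros((g0.size, theta.size))
    for j in range(theta.size):
        t = theta.copy()
        t[j] += eps
        J[:, j] = (np.atleast_1d(g(t)) - g0) / eps
    return J @ V @ J.T

# Two hypothetical functions of the first three raw moments (m1, m2, m3).
g = lambda m: np.array([m[1] - m[0]**2,                     # central 2nd moment
                        m[2] - 3*m[0]*m[1] + 2*m[0]**3])    # central 3rd moment
V = 0.01 * np.eye(3)                 # placeholder 3x3 moment covariance
print(delta_method_cov(g, [0.0, 1.0, 0.0], V))
```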

For the general, multiparameter case, the product of the purely experimental uncertainty estimate, s²pe, and the (XᵀX)⁻¹ matrix gives the estimated variance-covariance matrix, V. [Pg.105]

PLS is similar to PCR with the exception that the matrix decomposition for PLS is performed on the covariance matrix of the spectra and the reference concentrations, while for PCR only the spectra are used. PLS and PCR have similar performance if noise in the spectral data and errors in the reference concentration measurements are negligible. Otherwise, PLS generally provides better analysis than PCR [26]. [Pg.338]
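The contrast can be seen directly in code. In this hedged sketch (synthetic "spectra" and "concentrations"), PCR decomposes the spectra alone before regressing, while PLS builds its factors from spectra and concentrations together:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 50))                          # synthetic spectra
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=60)   # synthetic concentrations

pcr = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, y)
pls = PLSRegression(n_components=3).fit(X, y)
print("PCR R^2:", pcr.score(X, y), "  PLS R^2:", pls.score(X, y))
```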

This iteration process can be repeated until a satisfactory solution is obtained. In general, it is not easy to determine when a solution is satisfactory. The simplest method is to monitor the covariance matrix of the error estimate and break off the iteration process when this covariance matrix falls below a given value or decreases by less than a given fraction from one step to the next. [Pg.166]
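A sketch of such a stopping rule (the contraction step and thresholds are invented for illustration; the trace is used here as the scalar measure of the covariance matrix):

```python
import numpy as np

def iterate_until_converged(step, P0, tol=1e-4, min_drop=0.01, max_iter=100):
    """Run `step`, which returns the next error-estimate covariance matrix,
    until its trace falls below `tol` or shrinks by less than the
    fraction `min_drop` between consecutive steps."""
    P = P0
    for k in range(1, max_iter + 1):
        P_next = step(P)
        size, size_next = np.trace(P), np.trace(P_next)
        if size_next < tol or (size - size_next) < min_drop * size:
            return P_next, k
        P = P_next
    return P, max_iter

# Dummy step that contracts the covariance by 20% per iteration.
P_final, n_iter = iterate_until_converged(lambda P: 0.8 * P, np.eye(2))
print(n_iter, np.trace(P_final))
```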

For convenience, we normalized the univariate normal distribution so that it had a mean of zero and a standard deviation of one (see Section 3.1.2, Equation 3.5 and Equation 3.6). In a similar fashion, we now define the generalized multivariate squared distance of an object's data vector, x, from the mean, μ, where Σ is the variance-covariance matrix (described later): D² = (x − μ)ᵀ Σ⁻¹ (x − μ). [Pg.52]
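This is the squared Mahalanobis distance; a minimal sketch (values invented for illustration):

```python
import numpy as np

def mahalanobis_sq(x, mu, Sigma):
    """Generalized squared distance D^2 = (x - mu)^T Sigma^{-1} (x - mu)."""
    d = np.asarray(x) - np.asarray(mu)
    return float(d @ np.linalg.solve(Sigma, d))

mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])        # correlated variables
print(mahalanobis_sq([1.0, 1.0], mu, Sigma))   # distance accounts for correlation
```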

The sample of individuals is assumed to represent the patient population at large, sharing the same pathophysiological and pharmacokinetic-dynamic parameter distributions. The individual parameter θ is assumed to arise from some multivariate probability distribution θ ~ f(ψ), where ψ is the vector of so-called hyperparameters or population characteristics. In the mixed-effects formulation, the collection of ψ is composed of population typical values (generally the mean vector) and of population variability values (generally the variance-covariance matrix). Mean and variance characterize the location and dispersion of the probability distribution of θ in statistical terms. [Pg.312]
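A hedged sketch of that hierarchy, assuming a multivariate normal f(ψ) with a typical-value vector and a population covariance Ω (all numbers are illustrative, not from any cited study):

```python
import numpy as np

rng = np.random.default_rng(4)
pop_mean = np.array([1.0, 0.5])                  # hypothetical typical values
Omega = np.array([[0.04, 0.01],
                  [0.01, 0.02]])                 # population variability

# Draw individual parameter vectors theta_i around the population mean.
theta = rng.multivariate_normal(pop_mean, Omega, size=100)
print(theta.mean(axis=0))                        # recovers pop_mean approximately
print(np.cov(theta, rowvar=False))               # recovers Omega approximately
```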

The general least-squares treatment requires that the generalized sum of squares of the residuals, the variance σ², be minimized. This is, by the geometry of error space, tantamount to the requirement that the residual vector be orthogonal with respect to fit space, and this is guaranteed when the scalar products of all fit vectors (the rows of Xᵀ) with the residual vector vanish, XᵀM⁻¹ε = 0, where M⁻¹ is the metric of error space. The successful least-squares treatment [34] yields the following minimum-variance linear unbiased estimators for the variables, their covariance matrix, the variance of the fit, the residuals, and their covariance matrix... [Pg.73]
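A compact sketch of that generalized least-squares solution, with the error metric M taken to be the error covariance matrix (the design matrix and data below are invented):

```python
import numpy as np

def gls(X, y, M):
    """Generalized least squares: beta = (X^T M^{-1} X)^{-1} X^T M^{-1} y,
    Cov(beta) = (X^T M^{-1} X)^{-1}; the residuals then satisfy the
    orthogonality condition X^T M^{-1} (y - X beta) = 0."""
    Minv = np.linalg.inv(M)
    cov_beta = np.linalg.inv(X.T @ Minv @ X)
    beta = cov_beta @ X.T @ Minv @ y
    resid = y - X @ beta
    return beta, cov_beta, resid

X = np.column_stack([np.ones(4), [0.0, 1.0, 2.0, 3.0]])
y = np.array([0.1, 1.1, 1.9, 3.2])
M = np.diag([1.0, 1.0, 2.0, 2.0])        # unequal error variances
beta, cov_beta, resid = gls(X, y, M)
print(beta)
print(np.allclose(X.T @ np.linalg.inv(M) @ resid, 0))   # orthogonality holds
```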

The vector of observations, with the components of Eq. 46, is then y = p − π. Provided the rotational parameters (rotational constants or moments) of the isotopomer s, as evaluated from the spectrum, are not correlated with those of any other isotopomer s′, in particular not with those of the parent (which is a general supposition in MRR spectroscopy), the required covariance matrix Θy is... [Pg.84]

The multiresponse counterpart of 1/σ² is the inverse covariance matrix Σ⁻¹, which exists only if Σ has full rank. This condition is achievable by selecting a linearly independent set of responses, as described in Chapter 7. Then the exponential function in Eq. (4.2-11) may be generalized to... [Pg.73]

The methods of Chapter 6 are not appropriate for multiresponse investigations unless the responses have known relative precisions and independent, unbiased normal distributions of error. These restrictions come from the error model in Eq. (6.1-2). Single-response models were treated under these assumptions by Gauss (1809, 1823) and, less completely, by Legendre (1805), co-discoverer of the method of least squares. Aitken (1935) generalized weighted least squares to multiple responses with a specified error covariance matrix; his method was extended to nonlinear parameter estimation by Bard and Lapidus (1968) and Bard (1974). However, least squares is not suitable for multiresponse problems unless information is given about the error covariance matrix; we may consider such applications at another time. [Pg.141]

