Big Chemical Encyclopedia

Random covariance matrix

Measurement noise covariance matrix R. The main problem with the instrumentation system was the randomness of the infrared absorption moisture content analyser. A number of measurements were taken from the analyser and compared with samples taken simultaneously by works laboratory staff. The errors could be approximated by a normal distribution with a standard deviation of 2.73%, or a variance of 7.46. [Pg.295]
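As a quick illustration of how such an R is obtained, the sketch below estimates a scalar measurement noise variance from paired analyser/laboratory values. The numbers are invented stand-ins, not the original study's data (note that 2.73² ≈ 7.46, consistent with the excerpt).

```python
import numpy as np

# Hypothetical paired data: analyser readings vs. simultaneous laboratory
# samples (invented values; the original study's data are not given).
analyser = np.array([12.1, 14.3, 11.8, 13.5, 12.9])
lab      = np.array([12.5, 13.9, 12.2, 13.1, 13.4])

errors = analyser - lab                   # approximately normally distributed
std = errors.std(ddof=1)                  # sample standard deviation (%)
R = std**2                                # scalar measurement noise covariance
print(f"std = {std:.2f} %, R = {R:.2f}")  # cf. 2.73 % and 7.46 in the text
```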

Equations (41.15) and (41.19) for the extrapolation and update of system states form the so-called state-space model. The solution of the state-space model has been derived by Kalman and is known as the Kalman filter. The assumptions are that the measurement noise v(j) and the system noise w(j) are random and independent, normally distributed, white and uncorrelated. This leads to the general formulation of a Kalman filter given in Table 41.10. Equations (41.15) and (41.19) account for the time dependence of the system. Eq. (41.15) is the system equation, which tells us how the system behaves in time (here in j units). Equation (41.16) expresses how the uncertainty in the system state grows as a function of time (here in j units) if no observations were made. Q(j - 1) is the variance-covariance matrix of the system noise, which contains the variance of w. [Pg.595]
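A minimal Python sketch of one extrapolation/update cycle may make the structure concrete. The function and matrix names (F for the system matrix, H for the measurement matrix) are generic stand-ins rather than the book's notation; the comments point to the equations cited above.

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One extrapolation/update cycle of a linear Kalman filter.

    x, P : current state estimate and its covariance
    F, Q : system matrix and system-noise covariance Q(j-1)
    H, R : measurement matrix and measurement-noise covariance
    z    : new measurement vector
    """
    x_pred = F @ x                         # system equation (cf. Eq. 41.15)
    P_pred = F @ P @ F.T + Q               # uncertainty growth (cf. Eq. 41.16)
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)  # state update (cf. Eq. 41.19)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```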

Here x_k is the target state vector at time index k, and w_k contains two random variables describing the unknown process error, which is assumed to be a Gaussian random variable with zero expectation and covariance matrix Q. In addition to the target dynamic model, a measurement equation is needed to implement the Kalman filter. This measurement equation maps the state vector x_k to the measurement domain. In the next section different measurement equations are considered to handle various types of association strategies. [Pg.305]

The vector n_k describes the unknown additive measurement noise, which is assumed, in accordance with Kalman filter theory, to be a Gaussian random variable with zero mean and covariance matrix R. Instead of the additive noise term n_k in equation (20), the errors of the different measurement values are assumed to be statistically independent and identically Gaussian distributed, so... [Pg.307]

The adjustment of measurements to compensate for random errors involves the resolution of a constrained minimization problem, usually one of constrained least squares. Balance equations are included in the constraints; these may be linear but are generally nonlinear. The objective function is usually quadratic with respect to the adjustment of measurements, and it has the covariance matrix of measurement errors as weights. Thus, this matrix is essential for obtaining reliable process knowledge. Some efforts have been made to estimate it from measurements (Almasy and Mah, 1984; Darouach et al., 1989; Keller et al., 1992; Chen et al., 1997). The difficulty in the estimation of this matrix is associated with the analysis of the serial and cross correlation of the data. [Pg.25]
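For the special case of purely linear balance constraints A x̂ = 0, the weighted least-squares adjustment has the closed form x̂ = x − ΣAᵀ(AΣAᵀ)⁻¹Ax, with the measurement-error covariance Σ acting as the weight matrix. A hypothetical sketch:

```python
import numpy as np

def reconcile(x, Sigma, A):
    """Adjust measurements x to satisfy linear balances A @ x_hat = 0,
    minimizing (x - x_hat)' Sigma^{-1} (x - x_hat)."""
    SAT = Sigma @ A.T
    correction = SAT @ np.linalg.solve(A @ SAT, A @ x)
    return x - correction

# Toy example: a single mass balance x1 + x2 - x3 = 0 (invented numbers)
A = np.array([[1.0, 1.0, -1.0]])
Sigma = np.diag([0.1, 0.2, 0.3])    # measurement-error covariance (weights)
x = np.array([10.2, 5.1, 14.7])     # raw, non-closing measurements
x_hat = reconcile(x, Sigma, A)
print(x_hat, A @ x_hat)             # adjusted values; balance residual ~ 0
```

Less precise measurements (larger variance) absorb more of the adjustment, which is exactly why a reliable covariance matrix matters.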

Most techniques for process data reconciliation start with the assumption that the measurement errors are random variables obeying a known statistical distribution, and that the covariance matrix of measurement errors is given. In Chapter 10 direct and indirect approaches for estimating the variances of measurement errors are discussed, as well as a robust strategy for dealing with the presence of outliers in the data set. [Pg.26]

However, care must be taken to avoid the singularity that occurs when C is not full rank. In general, the rank of C will be equal to the number of random variables needed to define the joint PDF. Likewise, its rank deficiency will be equal to the number of random variables that can be expressed as linear functions of other random variables. Thus, the covariance matrix can be used to decompose the composition vector into its linearly independent and linearly dependent components. The joint PDF of the linearly independent components can then be approximated by (5.332). [Pg.239]

The eigenvalue/eigenvector decomposition of the covariance matrix thus allows us to redefine the problem in terms of N_c independent, standard normal random variables θ_i. [Pg.239]
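A sketch of both ideas, rank detection and the redefinition in terms of independent standard normal variables (whitening). The tolerance used to decide rank deficiency is an arbitrary numerical choice:

```python
import numpy as np

def whiten(C, tol=1e-10):
    """Eigen-decompose a covariance matrix C and return a transform T such
    that, for x ~ N(mu, C), theta = T @ (x - mu) is a vector of independent,
    standard normal variables spanning the linearly independent directions."""
    evals, evecs = np.linalg.eigh(C)
    keep = evals > tol * evals.max()   # rank = number of independent components
    T = np.diag(1.0 / np.sqrt(evals[keep])) @ evecs[:, keep].T
    return T, keep.sum()

# Rank-deficient example: third variable is a linear function of the first two
rng = np.random.default_rng(0)
z = rng.standard_normal((1000, 2))
x = np.column_stack([z[:, 0], z[:, 1], z[:, 0] + z[:, 1]])
C = np.cov(x, rowvar=False)
T, rank = whiten(C)
print(rank)                                  # 2: one dependent component
theta = (x - x.mean(axis=0)) @ T.T
print(np.cov(theta, rowvar=False).round(2))  # ~ identity matrix
```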

For a vector X of n random variables with mean vector μ and n×n symmetric covariance matrix Σ, an m-point sample is a matrix X with n rows and m columns. [Pg.203]

Parallel to the case of a single random variable, the mean vector and covariance matrix of random variables involved in a measurement are usually unknown, suggesting the use of their sampling distributions instead. Let us assume that x is a vector of n normally distributed variables with mean n-vector μ and covariance matrix Σ. A sample of m observations has a mean vector x̄ and an n×n covariance matrix S. The properties of the t-distribution are extended to n variables by stating that the scalar m(x̄ − μ)ᵀS⁻¹(x̄ − μ) is distributed as Hotelling's T² distribution. The matrix S/m is simply the covariance matrix of the estimate x̄. There is no need to tabulate the T² distribution since the statistic... [Pg.206]
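The statistic alluded to at the end of the excerpt is conventionally the F transform of T², using the standard relation F = T² (m − n)/(n(m − 1)) ~ F(n, m − n) under the null hypothesis, which is why no T² tables are needed. A sketch:

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, mu0):
    """One-sample Hotelling T^2 test for H0: mean vector = mu0.
    X is an (m x n) sample matrix; the p-value comes from the
    equivalent F statistic."""
    m, n = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)            # n x n sample covariance
    d = xbar - mu0
    T2 = m * d @ np.linalg.solve(S, d)     # m (xbar - mu)' S^{-1} (xbar - mu)
    F = (m - n) / (n * (m - 1)) * T2       # ~ F(n, m - n) under H0
    p = stats.f.sf(F, n, m - n)
    return T2, p

X = np.random.default_rng(1).multivariate_normal([0, 0], np.eye(2), size=30)
print(hotelling_t2(X, np.zeros(2)))
```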

A random n-vector X has a mean vector μ and an n×n covariance matrix Σ; σ is the diagonal matrix with the standard deviations as diagonal terms and ρ the correlation matrix. Find the correlation matrix of the reduced vector given by... [Pg.208]
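The formula for the reduced vector is truncated in the excerpt; assuming it is the usual standardization u = σ⁻¹(X − μ), the decomposition Σ = σρσ makes its covariance (and hence correlation) matrix equal to ρ, which a short numerical check confirms:

```python
import numpy as np

# Hypothetical Sigma: the reduced vector u = sigma^{-1}(X - mu) has
# covariance sigma^{-1} Sigma sigma^{-1}, i.e. the correlation matrix rho.
Sigma = np.array([[4.0, 1.2],
                  [1.2, 9.0]])
sigma = np.diag(np.sqrt(np.diag(Sigma)))   # diagonal matrix of std devs
rho = np.linalg.inv(sigma) @ Sigma @ np.linalg.inv(sigma)
print(rho)   # unit diagonal; off-diagonal = 1.2 / (2 * 3) = 0.2
```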

We have considered in some detail in Section 4.2 the case where the random vector Y of n ancillary or dependent variables relates linearly to those of a vector X of n principal or independent variables (e.g., raw data) with covariance matrix Σ through the matrix equality... [Pg.219]

If we now consider an n-vector X of n random variables ("data") with mean μ_x and covariance matrix Σ_x, related to a vector Y of m ancillary variables through m functions φ_i, i = 1, ..., m... [Pg.225]
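First-order error propagation through such functions uses the Jacobian J of φ evaluated at the mean, Σ_y ≈ J Σ_x Jᵀ, which reduces to the exact linear case of Section 4.2 when φ is a linear map. A sketch with invented functions and numbers:

```python
import numpy as np

def propagate_covariance(J, Sigma_x):
    """First-order (linearized) error propagation: for Y = phi(X) with
    Jacobian J evaluated at mu_x, Sigma_y ~= J Sigma_x J'. For a strictly
    linear map Y = A X this is exact with J = A."""
    return J @ Sigma_x @ J.T

# Hypothetical example: y1 = x1 * x2 and y2 = x1 / x2 at mu_x = (2, 4)
mu = np.array([2.0, 4.0])
Sigma_x = np.diag([0.01, 0.04])
J = np.array([[mu[1],        mu[0]],            # d(x1*x2)/d(x1, x2)
              [1.0 / mu[1], -mu[0] / mu[1]**2]])  # d(x1/x2)/d(x1, x2)
print(propagate_covariance(J, Sigma_x))
```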

A vector x of n random variables has been measured m times, the ith measurement resulting in an estimate x_i of the mean value and an estimate S_i of the covariance matrix. A best estimate x̂ of the pooled ("weighted") average makes the sum of squared statistical distances to each x_i minimum. The scalar expression... [Pg.285]
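Setting the gradient of that sum of squared statistical (Mahalanobis) distances to zero gives the familiar closed form x̂ = (Σ_i S_i⁻¹)⁻¹ Σ_i S_i⁻¹ x_i; a short sketch with hypothetical measurements:

```python
import numpy as np

def pooled_mean(xs, Ss):
    """Best pooled estimate minimizing sum_i (x - x_i)' S_i^{-1} (x - x_i):
    x_hat = (sum S_i^{-1})^{-1} sum S_i^{-1} x_i."""
    W = [np.linalg.inv(S) for S in Ss]
    A = sum(W)
    b = sum(Wi @ xi for Wi, xi in zip(W, xs))
    x_hat = np.linalg.solve(A, b)
    S_hat = np.linalg.inv(A)          # covariance of the pooled estimate
    return x_hat, S_hat

# Two hypothetical measurements of the same 2-vector
xs = [np.array([1.0, 2.1]), np.array([1.2, 1.9])]
Ss = [np.diag([0.04, 0.09]), np.diag([0.01, 0.09])]
print(pooled_mean(xs, Ss)[0])   # pulled toward the more precise measurement
```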

In Sections 1.6.3 and 1.6.4, different possibilities were mentioned for estimating the central value and the spread, respectively, of the underlying data distribution. Also in the context of covariance and correlation, we assume an underlying distribution, but now this distribution is no longer univariate but multivariate, for instance a multivariate normal distribution. The covariance matrix Σ mentioned above expresses the covariance structure of the underlying, unknown distribution. Now, we can measure n observations (objects) on all m variables, and we assume that these are random samples from the underlying population. The observations are represented as rows in the data matrix X(n × m) with n objects and m variables. The task is then to estimate the covariance matrix from the observed data X. Naturally, there exist several possibilities for estimating Σ (Table 2.2). The choice should depend on the distribution and quality of the data at hand. If the data follow a multivariate normal distribution, the classical covariance measure (which is the basis for the Pearson correlation) is the best choice. If the data distribution is skewed, one could either transform them to more symmetry and apply the classical methods, or alternatively... [Pg.54]
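The contrast between the classical estimate and a robust one is easy to demonstrate. The sketch below contaminates a bivariate normal sample with a few gross outliers and compares np.cov with the minimum covariance determinant estimator (assuming scikit-learn is available; MCD is one of several robust choices, not necessarily the one in Table 2.2):

```python
import numpy as np

X = np.random.default_rng(2).multivariate_normal(
    mean=[0, 0], cov=[[1.0, 0.8], [0.8, 1.0]], size=200)
X[:5] = [8.0, -8.0]                     # a few gross outliers

C_classical = np.cov(X, rowvar=False)   # basis of the Pearson correlation
print(C_classical.round(2))             # badly distorted by the outliers

# A robust alternative:
from sklearn.covariance import MinCovDet
C_robust = MinCovDet(random_state=0).fit(X).covariance_
print(C_robust.round(2))                # close to the true covariance
```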

Two methods are used to evaluate the predictive ability of LDA and of all other classification techniques. One method consists of dividing the objects of the whole data set into two subsets, the training set and the prediction or evaluation set. The objects of the training set are used to obtain the covariance matrix and the discriminant scores. Then, the objects of the training set are classified, thus obtaining the apparent error rate and the classification ability, and the objects of the evaluation set are classified to obtain the actual error rate and the predictive ability. The subdivision into training and prediction sets can be randomly repeated many times, and with different percentages of the objects in the two sets, to obtain a better estimate of the predictive ability. [Pg.116]
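A compact sketch of this repeated random subdivision, using scikit-learn's LDA on a standard data set; the data set, split fraction, and number of repetitions are arbitrary choices for illustration:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
rates = []
for seed in range(20):                    # repeated random subdivision
    X_tr, X_ev, y_tr, y_ev = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
    # classification ability (training set) vs. predictive ability (evaluation set)
    rates.append((lda.score(X_tr, y_tr), lda.score(X_ev, y_ev)))
train_acc, eval_acc = np.mean(rates, axis=0)
print(f"classification ability ~ {train_acc:.3f}, "
      f"predictive ability ~ {eval_acc:.3f}")
```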

Principal component analysis is based on the eigenvalue-eigenvector decomposition of the n×n empirical covariance matrix C = XᵀX (refs. 22-24). The eigenvalues are denoted by λ_1 ≥ λ_2 ≥ ... ≥ λ_n > 0, where the last inequality follows from the presence of some random error in the data. Using the eigenvectors u_1, u_2, ..., u_n, define the new variables... [Pg.65]
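A sketch of this decomposition in Python, with the usual 1/(n − 1) normalization added to the XᵀX form quoted above (the excerpt's exact scaling convention is not recoverable):

```python
import numpy as np

def pca_scores(X):
    """PCA via eigen-decomposition of the empirical covariance matrix.
    X is assumed column-centered; returns eigenvalues (descending),
    eigenvectors, and the new variables (scores)."""
    C = X.T @ X / (X.shape[0] - 1)     # empirical covariance matrix
    evals, evecs = np.linalg.eigh(C)   # eigh returns ascending order
    order = np.argsort(evals)[::-1]    # lambda_1 >= lambda_2 >= ... > 0
    evals, evecs = evals[order], evecs[:, order]
    scores = X @ evecs                 # new variables z_i = u_i' x
    return evals, evecs, scores

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 4))
X -= X.mean(axis=0)
evals, evecs, Z = pca_scores(X)
print(evals)   # all strictly positive when the data contain random error
```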

Suppose we change the assumptions of the model in Section 5.3 to AS5: (x_i) is an independent and identically distributed sequence of random vectors such that x_i has a finite mean vector, a finite positive definite covariance matrix Σ_xx, and finite fourth moments E[x_j x_k x_l x_m] for all variables. How does the proof of consistency and asymptotic normality of b change? Are these assumptions weaker or stronger than the ones made in Section 5.2? ... [Pg.18]

For random sampling from the classical regression model in (17-3), reparameterize the likelihood function in terms of η = 1/σ and δ = (1/σ)β. Find the maximum likelihood estimators of η and δ and obtain the asymptotic covariance matrix of the estimators of these parameters. [Pg.90]

It is important to emphasize that all pharmacokinetic, fixed-effect and random parameters, i.e. θ, ω², and σ², are fitted in one step as mean values with standard errors by NONMEM. A covariance matrix of the random effects can be calculated. For a detailed description of the procedure see Grasela and Sheiner (1991) and Sheiner and Grasela (1991). [Pg.748]

If the original model is sufficiently perfect, the linearization of the problem adequate, the measurements unbiased (no systematic error), and the covariance matrix of the observations, Θ_y, a true representation of the experimental errors and their correlations, then σ² (Eq. 21c) should be near unity [34]. If Θ_y is indeed an honest assessment of the experimental errors, but σ² is nonetheless (much) larger than unity, model deficiencies are the most frequent source of this discrepancy. Relevant variables probably exist that have not been included in the model, and the experimental precision is hence better than can be utilized by the available model. Model errors have then been treated as if they were experimental random errors, and the results must be interpreted with great caution. In this often unavoidable case, it would clearly be meaningless to distinguish between a measurement with a small experimental error (below the useful limit of precision) and another measurement with an even smaller error (see ref. [41]). A deliberate modification of the variance-covariance matrix Θ_y towards larger and more equal variances might then be indicated, which results in a more equally weighted and less correlated matrix. [Pg.75]
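A minimal sketch of the σ² check described here, assuming the usual reduced weighted sum of squares σ² = rᵀΘ_y⁻¹r/(N − p) for N residuals and p fitted parameters (the correspondence to the source's Eq. (21c) is an assumption):

```python
import numpy as np

def goodness_of_fit(residuals, Theta_y, n_params):
    """Weighted residual variance sigma^2 = r' Theta_y^{-1} r / (N - p).
    Values near unity suggest Theta_y honestly describes the experimental
    errors; sigma^2 >> 1 points to model deficiencies."""
    r = np.asarray(residuals)
    chi2 = r @ np.linalg.solve(Theta_y, r)
    return chi2 / (r.size - n_params)

# Hypothetical check: uncorrelated 1% errors, 2 fitted parameters
r = np.array([0.012, -0.008, 0.015, -0.011, 0.009])
Theta_y = np.diag([0.01**2] * 5)
print(goodness_of_fit(r, Theta_y, 2))
```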

Consider the real m×m covariance matrix Σ and the real random error vector ε defined in Section 4.4. Σ satisfies... [Pg.75]

A real, symmetric matrix A is called positive definite if xᵀAx > 0 for every conforming nonzero real vector x. Extend the result of (a) to show that the covariance matrix Σ in Eq. (4.C-1) is positive definite if the scalar random variables ε_1, ..., ε_m are linearly independent, that is, if there is no nonzero m-vector x such that xᵀε vanishes over the sample space of the random vector ε. [Pg.75]
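A sketch of the argument, with the notation reconstructed from the excerpt (Σ for the covariance matrix, ε for the error vector with mean μ): for any conforming real vector x,

```latex
\mathbf{x}^{\mathsf{T}} \Sigma \, \mathbf{x}
  = \mathbf{x}^{\mathsf{T}}
    E\!\left[(\boldsymbol{\varepsilon}-\boldsymbol{\mu})
             (\boldsymbol{\varepsilon}-\boldsymbol{\mu})^{\mathsf{T}}\right]
    \mathbf{x}
  = E\!\left[\bigl(\mathbf{x}^{\mathsf{T}}
       (\boldsymbol{\varepsilon}-\boldsymbol{\mu})\bigr)^{2}\right]
  \;\ge\; 0 ,
```

with equality only when xᵀ(ε − μ) vanishes over the whole sample space, i.e. exactly when the components of ε are linearly dependent; linear independence therefore forces strict positivity.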

When the estimation procedure is clearly specified, an approximate covariance matrix of the estimate, S_θ, can also be calculated. This matrix reflects the degree of precision of the estimate, and depends on the experimental design, the parameters, and the noise statistics. A well-designed experiment with small random fluctuations will lead to precise estimates ("small" covariance), while a small number of uninformative data and/or a high level of noise will produce unreliable estimates ("large" covariance). [Pg.2948]
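For least-squares estimation with noise variance σ², a common approximation is S_θ ≈ σ²(JᵀJ)⁻¹, where J is the sensitivity (Jacobian) matrix of the model outputs with respect to the parameters and thus encodes the experimental design. A sketch contrasting an informative design with a poor one (the model and sample times are invented):

```python
import numpy as np

def estimate_covariance(J, sigma2):
    """Approximate covariance of a least-squares estimate:
    S_theta ~= sigma^2 (J'J)^{-1}. A sketch of the standard approximation,
    not necessarily the source's exact formula."""
    return sigma2 * np.linalg.inv(J.T @ J)

# Informative design (well-spread sample times) vs. a poor one
t_good = np.array([0.1, 0.5, 1.0, 2.0, 4.0])
t_poor = np.array([0.1, 0.11, 0.12, 0.13, 0.14])
J = lambda t: np.column_stack([np.ones_like(t), t])   # linear model y = a + b*t
print(np.diag(estimate_covariance(J(t_good), 0.01)))  # small variances
print(np.diag(estimate_covariance(J(t_poor), 0.01)))  # much larger variances
```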

The first attempt at estimating interindividual pharmacokinetic variability without neglecting the difficulties (data imbalance, sparse data, subject-specific dosing history, etc.) associated with data from patients undergoing drug therapy was made by Sheiner et al. using the nonlinear mixed-effects model approach. The vector θ of population characteristics is composed of all quantities of the first two moments of the distribution of the parameters: the mean values (fixed effects) and the elements of the variance-covariance matrix that characterize the random effects. [Pg.2951]

The random-effect parameters η_i and ε_ij are independent, (multivariate) normally distributed with zero means and variances Ω and σ², respectively. Ω is the p × p covariance matrix of the p-vector η_i. Based on the fact that η_i and ε_ij are independent and identically normally distributed, and on the linearization of Eq. (8), the expectation and variance-covariance of all observations for the ith individual (the first two moments) are given by... [Pg.2951]

