
Positive covariance

Some variables often have dependencies, such as reservoir porosity and permeability (a positive correlation) or the capital cost of a specific equipment item and its lifetime maintenance cost (a negative correlation). We can test the linear dependency of two variables (say x and y) by calculating the covariance between the two variables (σxy) and the correlation coefficient (r) ... [Pg.165]
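As a concrete illustration of the excerpt's test, the following minimal sketch (assuming NumPy; the porosity/permeability numbers are invented) computes both quantities for a positively correlated pair:

```python
import numpy as np

# Illustrative sketch: two positively correlated variables,
# e.g. porosity (x) and permeability (y) across reservoir samples.
rng = np.random.default_rng(0)
x = rng.normal(0.20, 0.05, size=500)            # porosity fractions (invented)
y = 10.0 * x + rng.normal(0.0, 0.1, size=500)   # permeability proxy, rises with x

cov_xy = np.cov(x, y, ddof=1)[0, 1]             # sample covariance sigma_xy
r_xy = np.corrcoef(x, y)[0, 1]                  # correlation coefficient r

print(f"cov(x, y) = {cov_xy:.4f}")   # positive: the variables rise together
print(f"r(x, y)   = {r_xy:.4f}")     # near +1: strong linear dependency
```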

To conclude this section we make a few remarks concerning the physical interpretation of the covariant amplitude ψ(x). For a free particle one would surmise that adoption of the manifold of positive... [Pg.535]

The formalism can be carried farther to discuss the particle observables and also the transformation properties of the ψ's and of the scalar product under Lorentz transformations. Since in our subsequent discussion we shall be primarily interested in the covariant amplitudes describing the photon, we shall not here carry out these considerations. We only mention that a position operator q having the properties that ... [Pg.550]

We shall adopt Eqs. (9-510) and (9-511) as the covariant wave equation for the covariant four-vector amplitude describing a photon. The physically realizable amplitudes correspond to positive-frequency solutions of Eq. (9-510) which, in addition, satisfy the subsidiary condition (9-511). In other words, the admissible wave functions satisfy... [Pg.552]

Here C = ⟨aaᵀ⟩ is the covariance of the basis functions used to model the turbulence. Covariance matrices are positive semi-definite by definition, which implies aᵀCa ≥ 0, and thus a defined maximum of Pr(a) exists. [Pg.380]
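A minimal numerical sketch of that claim (assuming NumPy; the coefficient vectors a are synthetic) checks both the eigenvalues and an arbitrary quadratic form:

```python
import numpy as np

# Sketch: a covariance built as C = <a a^T> is positive semi-definite,
# so every quadratic form v^T C v is >= 0 and all eigenvalues are >= 0.
rng = np.random.default_rng(1)
a_samples = rng.normal(size=(1000, 6))           # synthetic coefficient vectors a
C = (a_samples.T @ a_samples) / len(a_samples)   # empirical <a a^T>

eigenvalues = np.linalg.eigvalsh(C)              # symmetric -> real eigenvalues
v = rng.normal(size=6)                           # arbitrary test vector

print(eigenvalues.min() >= -1e-12)               # True: no negative eigenvalues
print(v @ C @ v >= 0)                            # True: quadratic form non-negative
```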

Hoetelmans RM (1999) Pharmacology of antiretroviral drugs. Antivir Ther 4(Suppl 3):29-41
Hoffman NG, Schiffer CA, Swanstrom R (2003) Covariation of amino acid positions in HIV-1 protease. Virology 314:536-548 [Pg.105]

It can be shown that all symmetric matrices of the form XᵀX and XXᵀ are positive semi-definite [2]. These cross-product matrices include the widely used dispersion matrices, which can take the form of a variance-covariance or correlation matrix, among others (see Section 29.7). [Pg.31]

A theorem, which we do not prove here, states that the nonzero eigenvalues of the product AB are identical to those of BA, where A is an n×p and B is a p×n matrix [3]. This applies in particular to the eigenvalues of the matrices of cross-products XᵀX and XXᵀ, which are of special interest in data analysis as they are related to dispersion matrices such as variance-covariance and correlation matrices. If X is an n×p matrix of rank r, then the product XᵀX has r positive eigenvalues in Λ and possesses r eigenvectors in V, since we have shown above that ... [Pg.39]
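Both claims are easy to check numerically. The sketch below (assuming NumPy; X is a synthetic rank-r matrix) confirms that XᵀX and XXᵀ share the same r positive eigenvalues:

```python
import numpy as np

# Sketch: for X (n x p) of rank r, X^T X and X X^T share the same
# r positive eigenvalues; the remaining eigenvalues are zero.
rng = np.random.default_rng(2)
n, p, r = 8, 5, 3
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, p))  # rank r by construction

eig_small = np.linalg.eigvalsh(X.T @ X)   # p eigenvalues
eig_big = np.linalg.eigvalsh(X @ X.T)     # n eigenvalues

tol = 1e-8
nonzero_small = np.sort(eig_small[eig_small > tol])
nonzero_big = np.sort(eig_big[eig_big > tol])

print(len(nonzero_small) == r)                  # True: r positive eigenvalues
print(np.allclose(nonzero_small, nonzero_big))  # True: AB and BA agree
```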

The matrix Cp contains the variances of the columns of X on the main diagonal and the covariances between the columns in the off-diagonal positions (see also Section 9.3.2.4.4). The correlation matrix Rp is derived from the column-standardized matrix Zp ... [Pg.49]
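The following sketch (assuming NumPy, and borrowing the excerpt's names Cp, Rp, and Zp) builds both matrices from a synthetic data matrix X:

```python
import numpy as np

# Sketch following the excerpt's notation: Cp is the column
# variance-covariance matrix of X; Rp is the covariance of the
# column-standardized matrix Zp, i.e. the correlation matrix.
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))

Xc = X - X.mean(axis=0)                 # column-centered data
Cp = (Xc.T @ Xc) / (len(X) - 1)         # variances on the diagonal,
                                        # covariances off-diagonal
Zp = Xc / X.std(axis=0, ddof=1)         # column-standardized matrix
Rp = (Zp.T @ Zp) / (len(X) - 1)         # correlation matrix

print(np.allclose(np.diag(Rp), 1.0))                  # unit diagonal, as expected
print(np.allclose(Rp, np.corrcoef(X, rowvar=False)))  # matches NumPy's corrcoef
```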

A later analysis (Ernhart et al. 1987) related PbB levels obtained at delivery (maternal and cord blood) and at 6 months, 2 years, and 3 years of age to developmental tests (MDI, PDI, Kent Infant Development Scale [KID], and Stanford-Binet IQ) administered at 6 months, 1 year, 2 years, and 3 years of age, as appropriate. After controlling for covariates and confounding risk factors, the only significant associations of blood lead with concurrent or later development were an inverse association between maternal (but not cord) blood lead and MDI, PDI, and KID at 6 months, and a positive association between 6-month PbB and 6-month KID. The investigators concluded that, taken as a whole, the results of the 21 analyses of correlation between blood lead and developmental test scores were "reasonably consistent with what might be expected on the basis of sampling variability," and that any association of blood lead level with measures of development was likely to be due to the dependence of both PbB and... [Pg.125]

Just as in everyday life, in statistics a relation is a pair-wise interaction. Suppose we have two random variables, ga and gb (e.g., one can think of an axial S = 1/2 system with g∥ and g⊥). The g-value is a random variable and a function of two other random variables, g = f(ga, gb). Each random variable is distributed according to its own, say, Gaussian distribution with a mean and a standard deviation; for ga, for example, ⟨ga⟩ and σa. The standard deviation is a measure of how much a random variable can deviate from its mean, either in a positive or negative direction. The standard deviation itself is a positive number, as it is defined as the square root of the variance σa². The extent to which two random variables are related, that is, how much their individual variation is intertwined, is then expressed in their covariance Cab ... [Pg.157]

If two random variables are uncorrelated, then both their covariance Cab and their correlation coefficient rab are equal to zero. If two random variables are fully correlated, then the absolute value of their covariance is |Cab| = σaσb, and the absolute value of their correlation coefficient is unity, |rab| = 1. A key point to note for our EPR linewidth theory to be developed is that two fully correlated variables can be fully positively correlated, rab = +1, or fully negatively correlated, rab = -1. Of course, if two random variables are correlated to some extent, then 0 < |Cab| < σaσb and 0 < |rab| < 1. [Pg.157]
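A short simulation (assuming NumPy; the g-value data are invented) illustrates the cases distinguished above, from full positive correlation through no correlation:

```python
import numpy as np

# Sketch: covariance and correlation for fully and partially
# correlated pairs, illustrating |r_ab| <= 1 and its sign.
rng = np.random.default_rng(4)
g_a = rng.normal(2.0, 0.05, size=10_000)
noise = rng.normal(0.0, 0.05, size=10_000)

pairs = {
    "fully positive (g_b = 2 g_a)": 2.0 * g_a,
    "fully negative (g_b = -g_a)": -g_a,
    "partially correlated": g_a + noise,
    "uncorrelated": rng.normal(2.0, 0.05, size=10_000),
}

for label, g_b in pairs.items():
    C_ab = np.cov(g_a, g_b)[0, 1]
    r_ab = np.corrcoef(g_a, g_b)[0, 1]
    print(f"{label:30s} C_ab = {C_ab:+.5f}, r_ab = {r_ab:+.3f}")
```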

Equation 41-A3 can be checked by expanding the last term, collecting terms, and verifying that all the terms of equation 41-A2 are regenerated. The third term in equation 41-A3 is a quantity called the covariance between A and B. The covariance is a quantity related to the correlation coefficient. Since the differences from the mean are randomly positive and negative, the products of the two differences from their respective means are also randomly positive and negative, and tend to cancel when summed. Therefore, for independent random variables the covariance is zero, since the correlation coefficient is zero for uncorrelated variables. In fact, the mathematical definition of uncorrelated is that this sum-of-cross-products term is zero. Therefore, since A and B are random, uncorrelated variables ... [Pg.232]
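Since equations 41-A2 and 41-A3 are not reproduced in this excerpt, the sketch below (assuming NumPy) illustrates only the standard identity behind them, var(A + B) = var(A) + var(B) + 2 cov(A, B), and shows the cross-product term vanishing for independent variables:

```python
import numpy as np

# Sketch: variance-of-a-sum decomposition. For independent A and B the
# covariance (sum-of-cross-products) term is ~0, so the variances add.
rng = np.random.default_rng(5)
A = rng.normal(0.0, 1.0, size=100_000)
B = rng.normal(0.0, 2.0, size=100_000)   # drawn independently of A

cov_AB = np.cov(A, B)[0, 1]
lhs = np.var(A + B, ddof=1)
rhs = np.var(A, ddof=1) + np.var(B, ddof=1) + 2.0 * cov_AB

print(f"cov(A, B) = {cov_AB:+.5f}  (near zero: A, B independent)")
print(f"var(A+B) = {lhs:.4f}, decomposition = {rhs:.4f}")
```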

Let us review what we did with the depression example so far. First, we conjectured a taxon and three indicators. Next, we selected one of these indicators (anhedonia) as the input variable and the two other indicators (sadness and suicidality) as the output variables. Input and output are labels that refer to the role of an indicator in a given subanalysis. We cut the input indicator into intervals, hence the word "Cut" in the name of the method (Coherent Cut Kinetics), and we looked at the relationship between the output indicators. Specifically, we calculated covariances of the output indicators in each interval, hence the word "Kinetics": we moved calculations from interval to interval. Suppose that after all that was completed, we find a clear peak in the covariance of sadness and suicidality, which allows us to estimate the position of the hitmax and the taxon base rate. What next? Now we need to get multiple estimates of these parameters. To achieve this, we change the... [Pg.42]
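A schematic sketch of the interval-by-interval computation described above (assuming NumPy; the indicator data, mixture proportions, and interval count are all invented for illustration):

```python
import numpy as np

# Schematic MAXCOV-style sketch: cut the input indicator into intervals
# and compute the covariance of the two output indicators within each.
# A peak in that curve suggests the interval containing the hitmax.
rng = np.random.default_rng(6)

# Invented mixed sample: a taxon with elevated scores plus a complement class.
n_taxon, n_complement = 200, 800
anhedonia = np.r_[rng.normal(3, 1, n_taxon), rng.normal(0, 1, n_complement)]
sadness = np.r_[rng.normal(3, 1, n_taxon), rng.normal(0, 1, n_complement)]
suicidality = np.r_[rng.normal(3, 1, n_taxon), rng.normal(0, 1, n_complement)]

edges = np.quantile(anhedonia, np.linspace(0, 1, 11))  # 10 input intervals
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (anhedonia >= lo) & (anhedonia < hi)
    if mask.sum() > 2:
        cov = np.cov(sadness[mask], suicidality[mask])[0, 1]
        print(f"interval [{lo:+.2f}, {hi:+.2f}): cov(sadness, suicidality) = {cov:+.3f}")
```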

In the context of our discussion in this chapter, we represent the measurement obtained using the waveform φ as a Gaussian measurement with covariance Rφ. The current state of the system is represented by the state covariance matrix P. Of course, the estimated position and velocity of the target are also important for the tracking function of the radar, but in this context they play no role in the choice of waveforms. In a clutter-rich (and varying) scenario, the estimate of the target parameters will clearly play a more important role. The expected information obtained from a measurement with such a waveform, given the current state of... [Pg.278]

We assume knowledge of the possible state covariances P generated by the tracking system. This knowledge is statistical and is represented by a probability distribution F(P) over the space of all positive definite matrices. [Pg.279]
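The excerpt's actual waveform-selection criterion is not reproduced here; the hedged sketch below (assuming NumPy) only illustrates the ingredients named above: random positive definite state covariances standing in for F(P), a Gaussian measurement covariance, and an information score from a standard Kalman-style covariance update with an assumed identity observation model:

```python
import numpy as np

# Hedged sketch: draw random positive definite state covariances P
# (standing in for draws from F(P)), then score one measurement
# covariance R by the average log-volume reduction of the posterior.
# The update P_post = P - P (P + R)^{-1} P assumes a direct (identity)
# observation of the state; the excerpt's criterion is not reproduced.
rng = np.random.default_rng(7)

def random_spd(dim: int) -> np.ndarray:
    """Random symmetric positive definite matrix via A A^T + eps I."""
    A = rng.normal(size=(dim, dim))
    return A @ A.T + 1e-3 * np.eye(dim)

R = random_spd(2)                    # measurement covariance for one waveform
gains = []
for _ in range(500):                 # Monte Carlo over the P distribution
    P = random_spd(2)
    P_post = P - P @ np.linalg.solve(P + R, P)   # Kalman-style update
    _, logdet_P = np.linalg.slogdet(P)
    _, logdet_post = np.linalg.slogdet(P_post)
    gains.append(0.5 * (logdet_P - logdet_post))  # information in nats

print(f"expected information gain ~ {np.mean(gains):.3f} nats")
```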


Related entries: Covariance · Covariance symmetric/positive-definite · Covariant · Covariates · Covariation
