Big Chemical Encyclopedia

Canonical variables

Equations (4) are called quasi-Hamiltonian because, even though they employ generalized velocities, they describe the motion in the space of canonical variables. Accordingly, numerical trajectories computed with appropriate integrators will conserve the symplectic structure. For example, an implicit leapfrog integrator can be expressed as... [Pg.125]
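The implicit scheme itself is not reproduced in the excerpt. As generic background, the symplectic character mentioned above can be illustrated with the standard explicit leapfrog (velocity-Verlet) step for a separable Hamiltonian, where it shows up as a bounded energy error; the harmonic-oscillator test case below is an illustrative assumption, not the quasi-Hamiltonian system of the source:

```python
# Explicit leapfrog (velocity-Verlet) for H = p^2/2 + V(q).
# Being symplectic, it keeps the energy error bounded instead
# of letting it drift.  Unit mass and frequency are assumed.
def leapfrog(q, p, dt, force, n_steps):
    for _ in range(n_steps):
        p += 0.5 * dt * force(q)   # half kick
        q += dt * p                # drift
        p += 0.5 * dt * force(q)   # half kick
    return q, p

force = lambda q: -q               # V(q) = q^2/2
energy = lambda q, p: 0.5 * (p * p + q * q)

q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, 0.01, force, 10_000)
print(energy(q0, p0), energy(q1, p1))  # nearly equal after 10^4 steps
```

Over 10,000 steps the energy stays within a small bounded band of its initial value, which is the practical signature of a structure-preserving integrator.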

We can go one step further, however. Each of the above multiple regression relations is between a single variable (response) of one data set and a linear combination of the variables (predictors) from the other set. Instead, one may consider the multiple-multiple correlation, i.e. the correlation of a linear combination from one set with a linear combination of the other set. Such linear combinations of the original variables are variously called factors, components, latent variables, canonical variables or canonical variates (also see Chapters 9, 17, 29, and 31). [Pg.319]

The particular linear combinations of the X- and Y-variables achieving the maximum correlation are the so-called first canonical variables, say t1 = Xw1 and u1 = Yq1. The vectors of coefficients w1 and q1 in these linear combinations are the canonical weights for the X-variables and Y-variables, respectively. For the data of Table 35.5 they are found to be w1 = [0.583, -0.561] and q1 = [0.737, 0.731]. The correlation between these first canonical variables is called the first canonical correlation, ρ1. This maximum correlation turns out to be quite high (ρ1 = 0.95, R² = 0.90), indicating a strong relation between the first canonical dimensions of X and Y. [Pg.319]

The next pair of canonical variates, t2 and u2, also has maximum correlation ρ2, subject, however, to the condition that this second pair should be uncorrelated with the first pair, i.e. t1ᵀt2 = u1ᵀu2 = 0. For the example at hand, this second canonical correlation is much lower (ρ2 = 0.55, R² = 0.31). For larger data sets, the analysis goes on extracting additional pairs of canonical variables, orthogonal to the previous ones, until the data table with the smaller number of variables has been... [Pg.319]
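A numerical sketch of this whole procedure, assuming the standard SVD formulation of CCA and synthetic data (Table 35.5 is not reproduced in the excerpt), might look like:

```python
import numpy as np

def cca(X, Y):
    """Canonical correlations and weight vectors via SVD of the
    whitened cross-covariance matrix."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)

    def inv_sqrt(S):               # inverse matrix square root
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

    Sxx_i, Syy_i = inv_sqrt(X.T @ X), inv_sqrt(Y.T @ Y)
    U, s, Vt = np.linalg.svd(Sxx_i @ (X.T @ Y) @ Syy_i)
    return s, Sxx_i @ U, Syy_i @ Vt.T   # correlations, W, Q

# synthetic data sharing a latent structure (not Table 35.5)
rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 2))
X = Z + 0.3 * rng.normal(size=(100, 2))
Y = Z + 0.3 * rng.normal(size=(100, 2))

rho, W, Q = cca(X, Y)
print(rho)   # rho[0] >= rho[1]: first and second canonical correlations
```

The successive pairs of scores XW[:, k] and YQ[:, k] are mutually uncorrelated across k, matching the orthogonality condition stated above.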

A major limitation to the value of CCA has thus already become apparent in the example shown. There is no guarantee that the most important canonical variable t1 (or u1) is highly correlated with any of the individual variables of X (or Y). It is possible then for the first canonical variable t1 of X to be strongly correlated with u1, yet to have very little predictive value for Y. In terms of principal components... [Pg.321]

We have seen that PCR and RRR form two extremes, with CCA somewhere in between. RRR emphasizes the fit of Y (criterion ii). Thus, in RRR the X-components ti preferably should correlate highly with the original Y-variables. Whether X itself can be reconstructed ("back-fitted") from such components ti is of no concern in RRR. With standard PCR, i.e. top-down PCR, the emphasis is initially more on the X-side (criterion i) than on the Y-side. CCA emphasizes the importance of correlation; whether the canonical variates t and u account for much variance in each respective data set is immaterial. Ideally, of course, one would like to have the best of all three worlds, i.e. when the major principal components of X (as in PCR) and the major principal components of Y (as in RRR) happen to be very similar to the major canonical variables (as in CCA). Is there a way to combine these three desiderata — summary of X, summary of Y and a strong link between the two — into a single criterion and to use this as a basis for a compromise method? The PLS method attempts to do just that. [Pg.331]
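A minimal illustration of the PLS compromise, sketched with synthetic data (variable names are illustrative, not from the source): the first PLS weight vectors maximize the covariance between the X-score and the Y-score, which balances correlation against the variance summarized in each block.

```python
import numpy as np

# First PLS weight vectors: the leading singular vectors of X'Y,
# maximizing cov(Xw, Yq) over unit-norm w and q.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
X -= X.mean(axis=0)
Y = 2.0 * X[:, :2] + 0.1 * rng.normal(size=(50, 2))
Y -= Y.mean(axis=0)

U, s, Vt = np.linalg.svd(X.T @ Y)
w, q = U[:, 0], Vt[0]
t, u = X @ w, Y @ q                 # first pair of PLS scores
print(np.cov(t, u)[0, 1])           # the maximized covariance
```

Because covariance (not correlation) is maximized, a PLS component cannot have a high score without also accounting for variance in its own block, which is exactly the compromise described above.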

A drawback of the method is that highly correlating canonical variables may contribute little to the variance in the data. A similar remark has been made with respect to linear discriminant analysis. Furthermore, CCA does not possess a direction of prediction, as it is symmetrical with respect to X and Y. For these reasons it is now replaced by two-block or multi-block partial least squares analysis (PLS), which bears some similarity to CCA without having its shortcomings. [Pg.409]

The ensemble average of a quantity u, which is a function of the canonical variables pi and qi, is given by the integral [Pg.444]
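The integral itself is not reproduced in the excerpt; in conventional notation the canonical ensemble average takes the standard form (a reconstruction from textbook statistical mechanics, not necessarily the source's exact notation):

```latex
\langle u \rangle \;=\;
\frac{\displaystyle\int u(p,q)\, e^{-H(p,q)/kT}\, \mathrm{d}p\,\mathrm{d}q}
     {\displaystyle\int e^{-H(p,q)/kT}\, \mathrm{d}p\,\mathrm{d}q}
```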

Still, we have the following relation for canonical variables x and p ... [Pg.139]
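The relation itself is elided in the excerpt. As general background (not necessarily the source's own equation), a canonically conjugate pair satisfies the classical Poisson-bracket relation and, in quantum mechanics, the canonical commutation relation:

```latex
\{x, p\} = 1, \qquad [\hat{x}, \hat{p}] = i\hbar
```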

Because of the aforementioned LDA hypotheses, the ellipses of different categories present equal eccentricity and axis orientation; they differ only in their location in the plane. By connecting the intersection points of each pair of corresponding ellipses, a straight line is identified which corresponds to the delimiter between the two classes (see Fig. 2.15B). For this reason, this technique is called linear discriminant analysis. The directions which maximize the separation between classes are called LDA canonical variables. [Pg.88]

What can we see from these results? The point x° is not a maximum, since the first eigenvalue is positive. Selecting the canonical variables w1 ≠ 0, w2 = w3 = 0 can increase the value of the function. By orthogonality of the... [Pg.60]

Like many other properties of the N-particle system, the electric dipole moment, μ(p, q, t), is a function of the canonical variables and time. We define the classical dipole autocorrelation function according to... [Pg.233]
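The definition is truncated in the excerpt; a common form of the classical dipole autocorrelation function (a standard expression, not necessarily the source's own) is

```latex
C(t) \;=\; \langle \boldsymbol{\mu}(0) \cdot \boldsymbol{\mu}(t) \rangle
\;=\; \lim_{T\to\infty} \frac{1}{T}
\int_0^T \boldsymbol{\mu}(\tau)\cdot\boldsymbol{\mu}(\tau+t)\, \mathrm{d}\tau ,
```

where the ensemble average may equivalently be written as a time average over trajectories generated by the canonical equations of motion.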

The model consists of a system S in interaction with a bath B. The system S is a harmonic oscillator with frequency Ω. Its canonical variables are q0, p0 and its Hamiltonian is [Pg.428]
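For a harmonic oscillator of frequency Ω the Hamiltonian in mass-weighted coordinates takes the standard form (a textbook expression, not necessarily the source's exact notation):

```latex
H_S \;=\; \tfrac{1}{2}\, p_0^{2} \;+\; \tfrac{1}{2}\, \Omega^{2} q_0^{2}
```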

Note that, in this case, the magnetic lines are contained in magnetic surfaces. There are in fact two families of them, given by the equations p = p0 and q = q0, where each line forms the intersection of two surfaces, one of each family (there are also two families of electric surfaces, u = u0 and v = v0). The functions (p, q) and (u, v) are the Clebsch variables of B and E, respectively [63,64]. They can be used as canonical variables [62]. As explained above, they are not uniquely defined, but may be changed by canonical transformations. [Pg.234]

We represent the phase-volume element dΓ in the form dΓ = dh dl dφ0, where h and φ0 are canonically conjugated variables, φ0 being an initial instant that enters, in addition to time φ, into the law of motion of a dipole. Another pair of canonical variables is l, ψ0; we omit the differential dψ0 in dΓ, since the variables we use do not depend on the azimuthal coordinate ψ0. [Pg.182]

Canonical correlation analysis (CCA) is a method for searching for interactions between two data sets, the matrices X and Y. These data sets may have different numbers of features but the same number of objects. Using canonical analysis one creates a set of canonical variables f for the data set X and a set of canonical variables g for the data set Y, similar to the factors in factor analysis. The canonical variables f and g should have the following properties ... [Pg.179]

The normalized eigenvectors f are the canonical variables for the matrix X, similar to the factor loadings in PCA. [Pg.179]

Square roots of the eigenvalues λ are the canonical correlations between the canonical variables f and g. [Pg.180]

The number of canonical variables gk for the matrix Y is equal to the number of features in Y. [Pg.180]

One has now extracted synthetic variables which explain most of the interactions between the data sets X and Y. But in most cases one needs an interpretation of these canonical variables in relation to the original features ... [Pg.180]

The correlations between f and the original features in X, or between g and the original features in Y, are called intra-set loadings. They express the part of the variance which is explained by the canonical variables of their own feature set, and are also called the extraction measures. [Pg.180]

The correlations between the original features of one set and the canonical variables of the second set are called inter-set loadings. They are also called redundancy measures and indicate the extent to which the first data set can be described by the features of the second set. The inter-set loadings characterize the overlapping of both sets or, in other words, the variance of the first set explained by the second set of features. [Pg.180]
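These two kinds of loadings can be sketched numerically as follows; the weights defining f and the score g of the Y-set are hypothetical stand-ins for an actual canonical analysis:

```python
import numpy as np

# Intra-set vs inter-set loadings for one canonical variable of X.
rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
f = X @ np.array([0.6, -0.5, 0.2])       # canonical variable of X (made-up weights)
g = f + 0.5 * rng.normal(size=30)        # canonical variable of Y (simulated)

intra = [np.corrcoef(X[:, j], f)[0, 1] for j in range(3)]  # own set
inter = [np.corrcoef(X[:, j], g)[0, 1] for j in range(3)]  # other set
extraction = float(np.mean(np.square(intra)))  # variance of X explained by f
redundancy = float(np.mean(np.square(inter)))  # variance of X explained via g
print(extraction, redundancy)
```

Averaging the squared intra-set loadings gives the extraction measure of the feature set; the squared inter-set loadings, averaged the same way, give the redundancy measure referred to in the tables below.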

The factor structure for two canonical variables (the loadings) is given in Tab. 5-8. [Pg.180]

These contributions represent the overall correlations with each canonical feature. Again we find factor patterns which are not very pronounced. The extraction and the redundancy measures are reported in Tab. 5-9. From the total values of the variance explained we see that both sets are well represented by their canonical variables. On the other hand the redundancy measure (90% or 72%) indicates that both feature sets may be of equal practical weight. [Pg.180]

Tab. 5-9. Variance of each feature set explained by the respective canonical variable, CV, and redundancy in the respective variable (variance explained by the other feature set) ...
By analogy with factor analysis we can now display the objects of the data set with those canonical features which extract the main portion of the data variation. Fig. 5-21 shows the samples from the interlaboratory test in the plane of the first two canonical variables. In addition to the already known special role of laboratories B and E, we note some indication of separation among the other laboratories as well. [Pg.181]

Fig. 5-21. Graphical representation of fifteen objects (samples) from the interlaboratory comparison in the plane of the two first canonical variables (each derived from two feature sets)...

© 2024 chempedia.info