
Covariant representation

Following the common approach in relativistic field theory, which aims at a manifestly covariant representation of the dynamics inherent in the field operators, so far all quantities have been introduced in the Heisenberg picture. To develop the framework of relativistic DFT, however, it is common practice to transform to the Schrödinger picture, so that the relativistic theory can be formulated in close analogy to its nonrelativistic limit. As usual, we choose the two pictures to coincide at $t = 0$. Once the field operators in the Schrödinger picture have been identified via $\hat\psi_S(\mathbf{x}) = \hat\psi(\mathbf{x}, t = 0)$, etc., the Hamiltonians $\hat H_{e,S}$, $\hat H_{\gamma,S}$, and $\hat H_{\mathrm{int},S}$ are immediately obtained in terms of the Schrödinger-picture field operators. [Pg.231]
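For orientation, the Schrödinger- and Heisenberg-picture operators are related in the standard way (textbook material, not quoted from the cited source):

$$
\hat O_S = e^{-i\hat H t/\hbar}\,\hat O_H(t)\,e^{+i\hat H t/\hbar},
\qquad
\hat\psi_S(\mathbf{x}) = \hat\psi_H(\mathbf{x}, t)\big|_{t=0},
$$

so that choosing the two pictures to coincide at $t = 0$ fixes the Schrödinger-picture field operators uniquely.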

In this section we describe the general approach to constructing conformally invariant ansatzes applicable to any (linear or nonlinear) system of partial differential equations on whose solution set a linear covariant representation of the conformal group C(1,3) is realized. Since the majority of the equations of relativistic physics, including the Klein-Gordon-Fock, Maxwell, massless Dirac, and Yang-Mills equations, respect this requirement, they can be handled within the framework of this approach. [Pg.275]

Furthermore, the general method presented in this chapter applies directly to solving the full Maxwell equations with currents. It can also be used to construct exact classical solutions of the Yang-Mills equations with Higgs fields and their generalizations. More generally, the method developed in this chapter can be applied efficiently to any conformally invariant wave equation on whose solution set a covariant representation of the conformal algebra in Eq. (15) is realized. [Pg.349]

The plan of the chapter is as follows. The next section provides a very brief overview of the CIS and RPA theories in the matrix-covariant representation, ensuring a basis-independent formulation (for the full AO or MO sets). Section 14.3... [Pg.416]

The primary purpose for expressing experimental data through model equations is to obtain a representation that can be used confidently for systematic interpolations and extrapolations, especially to multicomponent systems. The confidence placed in the calculations depends on the confidence placed in the data and in the model. Therefore, the method of parameter estimation should also provide measures of reliability for the calculated results. This reliability depends on the uncertainties in the parameters, which, with the statistical method of data reduction used here, are estimated from the parameter variance-covariance matrix. This matrix is obtained as a last step in the iterative calculation of the parameters. [Pg.102]
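As a concrete illustration of that last step (extracting parameter uncertainties from the variance-covariance matrix), here is a minimal sketch in Python; the model, data, and parameter values are hypothetical, and the cited source's actual data-reduction procedure may differ.

```python
# Illustrative sketch (not the cited source's algorithm): estimating the
# parameter variance-covariance matrix from a nonlinear least-squares fit.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # Hypothetical two-parameter model, e.g. a simple exponential decay.
    return a * np.exp(-b * x)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 40)
y = model(x, 2.5, 1.3) + rng.normal(scale=0.05, size=x.size)  # synthetic data

popt, pcov = curve_fit(model, x, y, p0=(1.0, 1.0))

# The diagonal of pcov gives parameter variances; its square root gives
# one-sigma uncertainties, the "measures of reliability" discussed above.
perr = np.sqrt(np.diag(pcov))
print("parameters:", popt)
print("one-sigma uncertainties:", perr)
```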

Consider next the current operator $j_\mu(x)$. The correspondence principle suggests that its form is $j_\mu(x) = -(e/2)\,[\bar\psi(x)\gamma_\mu, \psi(x)]$. Such a form for $j_\mu(x)$ does not satisfy Eq. (11-477). In fact, due to covariance, the spectral representation of the vacuum expectation value of $B_\mu(x) A_\nu(x')$, where $B_\mu(x)$ is an arbitrary four-vector, is given by... [Pg.704]

Co-representation matrices: explicit forms, 733; multiplication of, 731; of the nonunitary group, 732
Corliss, L. M., 757
Corson, E. M., 498
Coulomb field: Dirac equation in, 637
Coulomb gauge, 643, 657, 664
Counting functions, 165
Covariance matrix, 160
Covariant amplitude: of one-particle system, 511; of one-, two-, etc., particle systems, 511 [Pg.771]

LCA and CCK, on the other hand, appear to be strikingly dissimilar. All CCK procedures require at least one quasi-continuous indicator, and if there are none, the investigator has to create such an indicator (e.g., the SSMAXCOV procedure). In contrast, LCA does not require continuous indicators and deals only with categorical data. In the case of categorical data, the patterns of interest are usually apparent, so there is no need to summarize the data with correlations. Instead, LCA evaluates cross-tabulations and compares the number of cases across cells. This shift in the representation of the data necessitates other basic changes. For example, LCA operates with proportions instead of covariances and yields tables rather than plots. These differences aside, the two approaches have much in common. LCA, like CCK, starts with a set of correlated indicators. It also makes the assumption of zero nuisance covariance; in the LCA literature this is called the assumption of local independence, and it means that the indicators are presumed to be independent (i.e., uncorrelated) within latent classes. Moreover, LCA and CCK (MAXCOV in particular) use similar procedures for group assignment, and both of them involve Bayes's theorem. [Pg.90]
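To make the local-independence assumption concrete, the following sketch (synthetic data with invented class parameters, not an example from the cited source) generates two indicators that are independent within each latent class yet correlated in the pooled sample:

```python
# Minimal sketch of "local independence": two indicators drawn independently
# *within* each latent class still appear correlated in the pooled sample,
# because class membership shifts both of them together.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
cls = rng.random(n) < 0.4            # latent class membership (base rate 0.4)

# Within each class, the two indicators are drawn independently.
x = np.where(cls, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))
y = np.where(cls, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))

print("pooled covariance:   %.3f" % np.cov(x, y)[0, 1])              # clearly nonzero
print("covariance, class 0: %.3f" % np.cov(x[~cls], y[~cls])[0, 1])  # ~0
print("covariance, class 1: %.3f" % np.cov(x[cls], y[cls])[0, 1])    # ~0
```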

Representations of these and other tensors in an arbitrary system of coordinates may be constructed as follows. For each contravariant rank-2 Cartesian tensor $T^{\mu\nu}$ (such as $H^{\mu\nu}$) or covariant tensor $S_{\mu\nu}$ (such as $m_{\mu\nu}$), we define corresponding Riemannian representations... [Pg.71]

These are covariant and contravariant representations of the Cartesian identity tensor, and inverses of each other. [Pg.72]
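As a generic numerical check (not taken from the cited text), one can build both representations of the identity in plane polar coordinates, where the covariant representation is the metric $g_{ab} = (J^\top J)_{ab}$ built from the Jacobian $J$, and verify that the two are matrix inverses of each other:

```python
# Sketch: covariant and contravariant representations of the Cartesian
# identity tensor in plane polar coordinates (r, theta). Generic example,
# not code from the cited source.
import numpy as np

def jacobian(r, theta):
    # d(x, y)/d(r, theta) for x = r cos(theta), y = r sin(theta)
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

r, theta = 2.0, 0.7
J = jacobian(r, theta)

g_cov = J.T @ J               # covariant representation of the identity (the metric)
g_con = np.linalg.inv(g_cov)  # contravariant representation (the inverse metric)

# Their product recovers the mixed identity delta^a_b:
print(np.allclose(g_cov @ g_con, np.eye(2)))  # True
```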

In this section, we develop some useful relationships involving the determinants and inverses of projected tensors. Let $S_{\alpha\beta}$ be the Riemannian representation of an arbitrary symmetric covariant tensor with a Cartesian representation $S_{\mu\nu}$. We may write the Riemannian representation in block matrix form, using the indices $a, b$ to denote blocks in which $\alpha$ or $\beta$ runs over the soft coordinates and $i, j$ to represent the hard coordinates, as... [Pg.171]
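The block-matrix manipulations that follow rest on standard linear-algebra identities; in the soft/hard index notation above, the determinant factorizes through the Schur complement (general mathematics, not a quotation from the source):

$$
S = \begin{pmatrix} S_{ab} & S_{aj} \\ S_{ib} & S_{ij} \end{pmatrix},
\qquad
\det S = \det\!\left(S_{ij}\right)\,\det\!\left(S_{ab} - S_{aj}\,S_{ij}^{-1}\,S_{ib}\right),
$$

where $S_{ab} - S_{aj} S_{ij}^{-1} S_{ib}$ is the Schur complement of the hard-hard block, whose inverse gives the soft-soft block of $S^{-1}$.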

The only common factor is that the charge-current 4-tensor transforms in the same way. The vector representation develops a time-like component under Lorentz transformation, while the tensor representation does not. However, the underlying equations in both cases are the Maxwell-Heaviside equations, which transform covariantly in both cases and obviously in the same way for both vector and tensor representations. [Pg.261]

In Eq. (5), the product $q^\mu q^\nu$ is quaternion-valued and noncommutative, but not antisymmetric in the indices $\mu$ and $\nu$. The $B^{(3)}$ field and structure of O(3) electrodynamics must be found from a special case of Eq. (5), showing that O(3) electrodynamics is a Yang-Mills theory and also a theory of general relativity [1]. The important conclusion reached is that Yang-Mills theories can be derived from the irreducible representations of the Einstein group. This result is consistent with the fact that all theories of physics must, in principle, be theories of general relativity. From Eq. (1), it is possible to write four-valued, generally covariant components such as... [Pg.471]

From the irreducible representations of the Einstein group, there exist 4-vectors that are generally covariant and take the following form ... [Pg.483]

So one can consider V as the eigenvector representation of E, and QT as the eigenvector representation of C. Unfortunately, these matrices in their present form have little physical significance. The rows of QT, being eigenvectors of the covariance matrix, must be mutually orthogonal and therefore must contain negative elements at some points. The concept of a negative concentration is novel but unacceptable. [Pg.106]
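A short numerical sketch of this point (synthetic, strictly positive data; not the cited source's example): the eigenvectors of a covariance matrix come out mutually orthogonal, which generically forces negative entries, since two nonzero orthogonal vectors can only both be nonnegative if their nonzero entries do not overlap.

```python
# Sketch: eigenvectors of a covariance matrix are mutually orthogonal, so
# negative entries generically appear in them even when the underlying data
# (e.g., concentrations, spectra) are strictly positive.
import numpy as np

rng = np.random.default_rng(2)
data = rng.random((50, 4)) + 1.0          # strictly positive "measurements"
cov = np.cov(data, rowvar=False)

eigvals, eigvecs = np.linalg.eigh(cov)    # columns are orthonormal eigenvectors

print(np.allclose(eigvecs.T @ eigvecs, np.eye(4)))  # True: mutually orthogonal
print((eigvecs < 0).any(axis=0))          # negative entries show up in each column
```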

Similarly, in equation (6.9) for the covariances the matrices A and B are now time-dependent. We define the interaction representation by setting... [Pg.213]

This is transfer covariant if all quadratically integrable functions are represented in the same orbital basis. Requiring $f_{ps}$ to be orthogonal to all radial factors $P_a(r)$ enforces a unique representation but introduces Lagrange multipliers in the close-coupling equations. An alternative is to require... [Pg.146]

PLS is closely related to PCR. Both decompose the X-data into a smaller set of variables, i.e., the scores. However, they differ in how they relate the scores to the Y-data. In PCR, the scores from the PCA decomposition of the X-data are regressed onto the Y-data. In contrast, PLS decomposes both the Y- and the X-data into individual score and loading matrices. The orthogonal sets of scores for the X- and Y-data, T and U, respectively, are generated in a way that maximizes their covariance. This is an attractive feature, particularly in situations where not all the major sources of variability in X are correlated to the variability in Y. PLS attempts to find a different set of orthogonal scores for the X-data to give better predictions of the Y-data. Thus, the scores may yield a poorer representation of the X-data, while giving a better prediction of Y than would be possible with PCR. [Pg.36]
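A minimal sketch of the contrast (synthetic data and made-up dimensions, using scikit-learn's generic implementations rather than anything from the cited source): when Y is driven by a low-variance direction of X, two PCR components can miss it, while two PLS components, chosen to maximize covariance with Y, capture it.

```python
# Sketch comparing PCR and PLS on synthetic data (illustrative only).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 10)) * np.linspace(3.0, 0.5, 10)  # decreasing variance
y = X[:, -1] + 0.05 * rng.normal(size=100)  # y driven by the LOW-variance column

pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
pls = PLSRegression(n_components=2).fit(X, y)

print("PCR R^2:", pcr.score(X, y))  # poor: top PCA components ignore y
print("PLS R^2:", pls.score(X, y))  # good: scores chosen to covary with y
```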

The gradient expression given above is not particularly useful since it appears in the MO basis. Following the discussion in Appendix C about covariant and contravariant representations, we may rewrite the gradient as... [Pg.197]

In the first expression the integrals are in the covariant AO representation (in which they are calculated), and the one-index-transformed density elements are in the contravariant representation (obtained from the MO basis in the usual one- and two-electron transformations). The second expression is useful whenever the transformation matrix is calculated directly in the covariant AO representation, and it requires the transformation of the Fock matrix to the contravariant representation. The last expression is convenient when the number of perturbations is large, since it avoids the transformation of the covariant AO Fock matrix to the MO or contravariant AO representations. [Pg.241]
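In the usual conventions (as in standard response-theory texts; the symbols below are illustrative, not the source's code), the AO overlap matrix $S$ mediates the switch between the two representations, $M^{\mathrm{con}} = S^{-1} M^{\mathrm{cov}} S^{-1}$, and the trace pairing of a covariant with a contravariant matrix is representation-independent:

```python
# Sketch of covariant/contravariant AO bookkeeping with the overlap matrix S.
# All matrices are random stand-ins for Fock-like (covariant) and
# density-like (contravariant) quantities.
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.normal(size=(n, n))
S = A @ A.T + n * np.eye(n)          # symmetric positive-definite "overlap"
S_inv = np.linalg.inv(S)

F_cov = rng.normal(size=(n, n)); F_cov = 0.5 * (F_cov + F_cov.T)  # covariant
D_con = rng.normal(size=(n, n)); D_con = 0.5 * (D_con + D_con.T)  # contravariant

F_con = S_inv @ F_cov @ S_inv        # covariant -> contravariant
D_cov = S @ D_con @ S                # contravariant -> covariant

# The trace pairing of covariant with contravariant matrices is invariant:
print(np.allclose(np.trace(F_cov @ D_con), np.trace(F_con @ D_cov)))  # True
```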

