Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Covariant basis vectors

For each coordinate q^a in the full space, we may define a covariant basis vector ∂R/∂q^a and a contravariant basis vector ∂q^a/∂R, which obey orthogonality and completeness relations... [Pg.69]
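The orthogonality and completeness relations mentioned above can be verified numerically. The sketch below uses polar coordinates as an assumed example (not taken from the source): the covariant vectors are columns of the Jacobian of the map R(q), and the contravariant vectors are rows of its inverse.

```python
import numpy as np

# Hypothetical illustration: polar coordinates q = (r, theta) mapping to
# Cartesian R = (r cos(theta), r sin(theta)).
def jacobian(r, theta):
    # Columns are the covariant basis vectors dR/dq^a.
    return np.array([[np.cos(theta), -r * np.sin(theta)],
                     [np.sin(theta),  r * np.cos(theta)]])

r, theta = 2.0, 0.7
J = jacobian(r, theta)
covariant = J.T                     # rows: covariant vectors b_a = dR/dq^a
contravariant = np.linalg.inv(J)    # rows: contravariant vectors b^a = dq^a/dR

# Orthogonality: b^a . b_b = delta^a_b
print(np.allclose(contravariant @ covariant.T, np.eye(2)))
# Completeness: sum_a b_a (outer product) b^a = identity
completeness = sum(np.outer(covariant[a], contravariant[a]) for a in range(2))
print(np.allclose(completeness, np.eye(2)))
```

Both checks print True: the two basis sets are mutually dual and together resolve the identity.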

To prove Eq. (2.182), we start by expanding the Cartesian divergence in covariant basis vectors and the Cartesian gradient in covariant basis vectors, to obtain... [Pg.180]

The expression (18) features in the calculation of surface gradients. (An alternative derivation for the normal is also available through n = t1 × t2.) It is useful to introduce the covariant basis vectors in terms of the contravariant ones as follows ... [Pg.46]
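The cross-product route to the normal, n = t1 × t2, is easy to check on a concrete surface. The sketch below uses a unit sphere as an assumed parametrization (not from the source); for a sphere, the unit normal must come out radial.

```python
import numpy as np

# Assumed example surface: unit sphere x(u, v) =
# (sin u cos v, sin u sin v, cos u). The normal is the cross product
# of the two tangent (covariant) basis vectors t1 = dx/du, t2 = dx/dv.
def tangents(u, v):
    t1 = np.array([np.cos(u) * np.cos(v), np.cos(u) * np.sin(v), -np.sin(u)])
    t2 = np.array([-np.sin(u) * np.sin(v), np.sin(u) * np.cos(v), 0.0])
    return t1, t2

u, v = 0.9, 0.4
t1, t2 = tangents(u, v)
n = np.cross(t1, t2)
n /= np.linalg.norm(n)              # unit normal

# For a sphere the unit normal is radial: n = +/- x(u, v)
x = np.array([np.sin(u) * np.cos(v), np.sin(u) * np.sin(v), np.cos(u)])
print(np.allclose(np.abs(n @ x), 1.0))
print(np.isclose(n @ t1, 0.0))      # normal is orthogonal to the tangents
```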

Any 3N-dimensional Cartesian vector that is associated with a point on the constraint surface may be divided into a soft component, which is locally tangent to the constraint surface, and a hard component, which is perpendicular to this surface. The soft subspace is the f-dimensional vector space that contains all 3N-dimensional Cartesian vectors that are locally tangent to the constraint surface. It is spanned by f covariant tangent basis vectors... [Pg.70]
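The soft/hard split is just an orthogonal projection onto the tangent space of the constraint surface. A minimal sketch, assuming a single constraint |r| = const in 3-D (a made-up example, not the source's system):

```python
import numpy as np

# Single constraint |r| = const: the hard component of a vector is its
# projection on the constraint normal (the constraint gradient direction);
# the soft component is what remains, locally tangent to the surface.
r = np.array([1.0, 2.0, 2.0])
n = r / np.linalg.norm(r)           # unit normal to the sphere |r| = const
v = np.array([0.3, -1.0, 0.5])      # arbitrary Cartesian vector at r

v_hard = (v @ n) * n                # perpendicular to the surface
v_soft = v - v_hard                 # locally tangent to the surface

print(np.allclose(v_soft + v_hard, v))   # decomposition is exact
print(np.isclose(v_soft @ n, 0.0))       # soft part is tangent
```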

A generalized set of reciprocal vectors for a constrained system is defined here to be any set of f contravariant basis vectors b1, ..., bf and K covariant basis... [Pg.110]

In Cartesian coordinates the position vector (C.56) is expressed in terms of the unit base vectors e_x, e_y, e_z; hence a position vector increment dr between two infinitely close points yields dr = dx e_x + dy e_y + dz e_z. The base vectors g_a in the curvilinear system, called the natural basis of the curvilinear system (also called covariant base vectors), are defined such that the same position vector increment dr is given in terms of the curvilinear increments dq^a by dr = dq^a g_a. The distance element in curvilinear coordinate systems is then computed as the square of the element of arc length between the two infinitely close points ... [Pg.1163]
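The arc-length computation ds² = g_ab dq^a dq^b can be made concrete. The sketch below assumes spherical coordinates (r, θ, φ) as an example: the natural basis vector g_a is the a-th column of the Jacobian, and the metric g_ab = g_a · g_b comes out diagonal, diag(1, r², r² sin²θ).

```python
import numpy as np

# Assumed example: spherical coordinates (r, theta, phi).
# Columns of the Jacobian are the natural (covariant) base vectors g_a.
def jacobian(r, th, ph):
    return np.array([
        [np.sin(th)*np.cos(ph), r*np.cos(th)*np.cos(ph), -r*np.sin(th)*np.sin(ph)],
        [np.sin(th)*np.sin(ph), r*np.cos(th)*np.sin(ph),  r*np.sin(th)*np.cos(ph)],
        [np.cos(th),           -r*np.sin(th),             0.0],
    ])

r, th, ph = 2.0, 0.6, 1.1
J = jacobian(r, th, ph)
g = J.T @ J                          # metric tensor g_ab = g_a . g_b
print(np.allclose(g, np.diag([1.0, r**2, (r*np.sin(th))**2])))

dq = np.array([1e-3, 2e-3, -1e-3])   # curvilinear increments dq^a
ds2 = dq @ g @ dq                    # squared element of arc length
# Same answer as measuring the Cartesian increment dr = J dq directly:
print(np.isclose(ds2, np.linalg.norm(J @ dq)**2))
```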

The root of these methods is the decomposition of multivariate data into a series of orthogonal factors (eigenvectors), also called abstract factors. These factors are linear combinations of a set of orthogonal basis vectors that are the eigenvectors of the variance-covariance matrix (X^T X) of the original data matrix. The eigenvalues of this variance-covariance matrix are the solutions λ1, ..., λn of the determinantal equation... [Pg.175]
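A minimal numerical sketch of this decomposition (random data invented for illustration): the eigenvectors of X^T X for mean-centered X form an orthogonal factor basis, and each eigenvalue equals the sum of squares of the data projected onto its eigenvector.

```python
import numpy as np

# Abstract factors as eigenvectors of the variance-covariance matrix
# X^T X of mean-centered data (made-up random data for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X -= X.mean(axis=0)                        # mean-center the columns

C = X.T @ X                                # variance-covariance matrix (up to 1/(n-1))
eigvals, eigvecs = np.linalg.eigh(C)       # solutions of the determinantal equation

# The eigenvectors form an orthogonal basis
print(np.allclose(eigvecs.T @ eigvecs, np.eye(3)))
# Each eigenvalue is the sum of squares of the scores along its eigenvector
scores = X @ eigvecs
print(np.allclose((scores**2).sum(axis=0), eigvals))
```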

The two transformations in (10.2.2) are said to be contragredient, the first (typical of basis vectors) being covariant and the second (typical of vector components) being contravariant. The relationship is clearly reflexive in the sense that if we put R = U then R̃ = Ũ, and we can just as well write (10.2.2) as... [Pg.329]

The superscript T indicates matrix transposition. S = FF^T is the two-point correlation matrix of the basis vectors; only this parameter appears in the solution, not the basis vectors themselves. The nonlinearity of the problem is taken into account through iterative application of Eq. (8.2.8). The error covariance matrix for the retrieved temperature profile due to instrument noise propagation is... [Pg.357]
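The statement that only S = FF^T enters the solution can be illustrated: S is unchanged by any orthogonal mixing of the basis vectors, so the individual vectors never matter. The matrices F and Q below are made-up examples.

```python
import numpy as np

# With basis vectors stored as the columns of F, the two-point
# correlation matrix S = F F^T is invariant under orthogonal mixing
# F -> F Q, so a solution written in terms of S alone cannot depend
# on the individual basis vectors.
rng = np.random.default_rng(1)
F = rng.normal(size=(6, 3))                   # 3 basis vectors on a 6-level grid
S = F @ F.T

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal mixing matrix
S_mixed = (F @ Q) @ (F @ Q).T
print(np.allclose(S, S_mixed))
```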

The generalized Fisher theorems derived in this section are statements about the space variation of the vectors of the relative and absolute space-specific rates of growth. These vectors have a simple natural (biological, chemical, physical) interpretation: they express the capacity of a species of type u to fill out space; in genetic language, they are space-specific fitness functions. In addition, the covariance matrix of the vector of the relative space-specific rates of growth, g_ab [Eq. (25)], is a Riemannian metric tensor that enters the expression of a Fisher information metric [Eqs. (24) and (26)]. These results may serve as a basis for solving inverse problems for reaction transport systems. [Pg.180]

The extraction of the eigenvectors from a symmetric data matrix forms the basis and starting point of many multivariate chemometric procedures. The way in which the data are preprocessed and scaled, and how the resulting vectors are treated, has produced a wide range of related and similar techniques. By far the most common is principal components analysis. As we have seen, PCA provides n eigenvectors derived from an n × n dispersion matrix of variances and covariances, or correlations. If the data are standardized prior to eigenvector analysis, then the variance-covariance matrix becomes the correlation matrix [see Equation (25) in Chapter 1]. Another technique, strongly related to PCA, is factor analysis... [Pg.79]
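The standardization remark is quick to verify: after scaling each column to zero mean and unit standard deviation, the variance-covariance matrix of the scaled data equals the correlation matrix of the original data. The data below are invented for illustration.

```python
import numpy as np

# Columns with very different scales (made-up data).
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 4)) * [1.0, 5.0, 0.2, 3.0]

# Standardize: zero mean, unit standard deviation per column.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

cov_of_Z = np.cov(Z, rowvar=False)          # variance-covariance of standardized data
corr_of_X = np.corrcoef(X, rowvar=False)    # correlation matrix of raw data
print(np.allclose(cov_of_Z, corr_of_X))
```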

Requiring these order parameters to transform in a Lorentz-covariant way, we are led to a particular basis of 4 × 4 matrices, which was recently derived in detail (Capelle and Gross 1999a). The resulting order parameters represent a Lorentz scalar (one component), a four-vector (four components), a pseudoscalar (one component), an axial four-vector (four components), and an antisymmetric tensor of rank two (six independent components). This set of 4 × 4 matrices is different from the usual Dirac γ matrices. The latter only lead to a Lorentz scalar, a four-vector, etc., when combined with one creation and one annihilation operator, whereas the order parameter consists of two annihilation operators. [Pg.172]

Consider a 3-D domain that can be adequately described by the generalized curvilinear coordinate system (u, v, w) and whose mappings are adequately smooth to allow consistent definitions. Then, any vector F can be decomposed into three components with respect to the contravariant a^1, a^2, a^3 or the covariant a_1, a_2, a_3 linearly independent basis system as... [Pg.75]
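In 3-D the reciprocal (contravariant) set can be built explicitly from the covariant one by cross products divided by the cell volume, and either set then decomposes an arbitrary vector F. The basis vectors below are made-up examples.

```python
import numpy as np

# Three linearly independent covariant basis vectors (made-up values).
a1 = np.array([1.0, 0.2, 0.0])
a2 = np.array([0.0, 1.0, 0.3])
a3 = np.array([0.1, 0.0, 1.0])

# Reciprocal set a^i = (a_j x a_k) / [a_1 . (a_2 x a_3)], cyclic in (i, j, k),
# which satisfies the duality relation a^i . a_j = delta^i_j.
vol = a1 @ np.cross(a2, a3)
A_up = np.array([np.cross(a2, a3), np.cross(a3, a1), np.cross(a1, a2)]) / vol
A_dn = np.array([a1, a2, a3])
print(np.allclose(A_up @ A_dn.T, np.eye(3)))

# Decomposition of an arbitrary vector: F = sum_i (F . a^i) a_i
F = np.array([0.5, -1.2, 2.0])
print(np.allclose((A_up @ F) @ A_dn, F))
```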

Now, let us choose n − h linearly independent covariant vectors g_p as a basis in the reaction subspace V and show that n − h is the number of independent chemical reactions in the mixture, cf. below (4.45) and Rem. 4. These vectors can be written in the basis of W as... [Pg.153]
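The count n − h of independent reactions can be checked numerically: with n species and a formula (element-composition) matrix of rank h, the reaction vectors span its null space, of dimension n − h. The mixture below is a made-up illustration, not the source's example.

```python
import numpy as np

# Made-up mixture: species H2, O2, H2O, H2O2 ; elements H, O.
# Each column gives the elemental composition of one species.
formula = np.array([
    [2, 0, 2, 2],   # H content of H2, O2, H2O, H2O2
    [0, 2, 1, 2],   # O content
])
n = formula.shape[1]                 # number of species
h = np.linalg.matrix_rank(formula)   # rank of the composition matrix
print(n - h)                         # number of independent reactions

# Reaction vectors conserve every element, i.e. lie in the null space:
# e.g. 2 H2 + O2 -> 2 H2O has stoichiometric vector (-2, -1, 2, 0).
reaction = np.array([-2, -1, 2, 0])
print(np.all(formula @ reaction == 0))
```

Here n = 4 and h = 2, so there are 2 independent reactions, matching the n − h rule.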

On the other hand, the reaction rate J_r (r = 1, ..., n − h) may be obtained by multiplying (4.43) with vectors of the contravariant basis g^r (see (A.89)). Inserting in such a product from (4.33), from the relation between contra- and covariant bases in V (see (A.86)), and from (4.40), we obtain (using orthonormality) the relation between rates (the reverse of (4.44))... [Pg.154]

In Sect. 4.2, we need a vector space with a basis formed by k linearly independent vectors g_p (p = 1, ..., k) which are not generally perpendicular or of unit length [12, 18, 19]. Such a nonorthogonal basis we call a contravariant one. Covariant components of the so-called metric tensor are defined by... [Pg.295]
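For a nonorthogonal basis, the covariant metric components are simply the pairwise dot products g_pq = g_p · g_q (the Gram matrix), and lengths can then be computed from components alone. The basis below is a made-up example.

```python
import numpy as np

# Nonorthogonal basis vectors g_p stored as rows (made-up values).
G = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.2],
              [0.3, 0.0, 1.0]])
metric = G @ G.T                  # covariant metric g_pq = g_p . g_q

print(np.allclose(metric, metric.T))   # symmetric by construction
print(np.linalg.det(metric) > 0)       # nonzero for independent vectors

# The length of a vector follows from its components and the metric alone:
v = np.array([1.0, -2.0, 0.5])         # components v^p in the basis
print(np.isclose(v @ metric @ v, np.linalg.norm(v @ G)**2))
```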

Using Equations (6) and (5), the covariant tangent base vector in Equation (4) can be rewritten with respect to the basis g1, g2 as ... [Pg.2220]


See other pages where Covariant basis vectors is mentioned: [Pg.1158]    [Pg.1441]    [Pg.44]    [Pg.80]    [Pg.372]    [Pg.295]    [Pg.366]    [Pg.375]    [Pg.75]    [Pg.2746]    [Pg.238]    [Pg.146]    [Pg.1657]    [Pg.409]    [Pg.129]    [Pg.36]    [Pg.53]    [Pg.51]    [Pg.117]    [Pg.453]    [Pg.89]    [Pg.218]




