Big Chemical Encyclopedia


Matrix dimensions

Often the validity of this rule is obvious because the matrix dimensions are not conformable, but even for square matrices the product does not in general commute. [Pg.585]
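As a minimal illustration of the second point (a sketch, not from the cited source), two conformable square matrices whose products in the two orders differ:

    import numpy as np

    # Two arbitrary 2 x 2 matrices: the product is defined in both orders,
    # yet AB != BA in general.
    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    AB = A @ B
    BA = B @ A

    print("A @ B =\n", AB)
    print("B @ A =\n", BA)
    print("Commute?", np.allclose(AB, BA))   # False: square matrices need not commute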

C THIS SUBROUTINE PRINTS THE OCCURRENCE MATRIX DIMENSION K(162,162), IND(162), JNO(162)... [Pg.242]

The basis for calculating the correlation between two variables xj and xk is the covariance. The covariance matrix (dimension m x m) is a quadratic (square), symmetric matrix. The cases j = k (main diagonal) are covariances between one and the same variable, which are in fact the variances σjj of the variables xj for j = 1, ..., m (note that a different notation for variances was used in Chapter 1); such a matrix is also called the variance-covariance matrix (Figure 2.7). The matrix Σ refers to a data population of infinite size, and should not be confused with estimations of it as described in Section 2.3.2, for instance the sample covariance matrix C. [Pg.53]
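An illustrative NumPy sketch (with arbitrary made-up data, not from the cited text): the covariance matrix of an n x m data matrix is m x m, symmetric, and carries the variances on its main diagonal:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 100, 4                      # n samples (rows), m variables (columns)
    X = rng.normal(size=(n, m))        # hypothetical data matrix

    C = np.cov(X, rowvar=False)        # sample covariance matrix, dimension m x m
    print(C.shape)                     # (4, 4): quadratic (square)
    print(np.allclose(C, C.T))         # True: symmetric

    # The main diagonal (j = k) holds the variances of the individual variables.
    print(np.allclose(np.diag(C), X.var(axis=0, ddof=1)))   # True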

Compared to the scalar or semi-vectorial problem the matrix dimension is increased by a factor of four for a vectorial formulation, and the matrix is non-Hermitian, ...

Here the MATLAB on-screen term "Inner matrix dimensions" refers to the underlined inner size numbers of the matrix factors of D, as depicted by our underlining in D(3x5) = A(3x4) C(4x5). These inner dimensions are both equal to 4 for the matrix product D = A·C. [Pg.16]
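A NumPy analogue of the same rule (the snippet itself refers to MATLAB; this sketch is an illustration, not from the cited book): the inner dimensions of the factors must agree, otherwise the product is undefined:

    import numpy as np

    A = np.ones((3, 4))     # 3 x 4
    C = np.ones((4, 5))     # 4 x 5

    # Inner dimensions (4 and 4) agree, so D = A C is defined and is 3 x 5.
    D = A @ C
    print(D.shape)          # (3, 5)

    # Swapping the factors makes the inner dimensions 5 and 3, which do not agree;
    # NumPy raises an error analogous to MATLAB's "Inner matrix dimensions must agree".
    try:
        E = C @ A
    except ValueError as err:
        print("shape mismatch:", err)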

The second step performs the GRID calculation with a given probe on the three subunit models. In order to make the application of Boolean operations with the map files as easy as possible, the matrix dimension of the GRID box is kept exactly as in the largest model, i.e. that with α+β subunits, maintaining for both subunits the original atom coordinates of the complex. The three maps obtained are named A, B and C, respectively (Fig. 7.1). [Pg.152]

An equivalent procedure can be used for chains containing centers with S > 1. Since the matrix dimension of the eigenvalue problem increases significantly with higher S, smaller chain segments are used to extrapolate to N → ∞, increasing the uncertainty of this approach. Therefore, for chains with S > 5/2 centers, it is better to use an expression derived for classical spins (i.e., spin vectors that are not quantized with respect to spatial directions) ... [Pg.91]
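To make the growth concrete, a small sketch (an illustration, not the authors' code) of the Hilbert-space dimension (2S+1)^N of a chain of N centres with spin S; the rapid growth with S is what forces the use of shorter chain segments:

    from fractions import Fraction

    def chain_dimension(S, N):
        """Matrix dimension of the eigenvalue problem for a chain of N centres of spin S."""
        multiplicity = int(2 * Fraction(S) + 1)   # 2S + 1 states per centre
        return multiplicity ** N

    for S in ("1/2", "1", "5/2"):
        dims = [chain_dimension(S, N) for N in (4, 8, 12)]
        print(f"S = {S}:  N = 4, 8, 12  ->  dimensions {dims}")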

Figure 4. Comparison between the adiabatic and the reference SDs. α° = 1.00, T = 300 K, γ° = 0.200, ω° = 5000 cm⁻¹, Ω = 150 cm⁻¹. Matrix dimension in the adiabatic method: 30.

In cases where the Heisenberg Hamiltonian matrix dimensions prohibit an exact treatment, one can use, for energy states which are predominantly antiferromagnetic, an approximate method based on the idea of forming bonds between different lattice sites. These bonds, or rather the wave functions formed for these bonds, are then allowed to resonate, giving rise to the RVB method mentioned earlier. The RVB method is very useful, since it not only makes possible a treatment of lattices which are out of reach by... [Pg.604]

The approach described above may be extended to 2-by-2-site or 2-by-3-site jumps, as observed for ²H in, for example, thiourea-d4 or DMS-d6. From a computational point of view, these two jump processes resemble the 4-site and 6-site jump processes, respectively, as regards matrix dimensions, but the presence of two rate constants k1 and k2 complicates matters. So far only single-axis jump processes have been described. In order to generalize to multi-axis processes the following example may be useful. A more comprehensive description of such processes may be found in the work of Kristensen et al.30,52 [Pg.114]
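For orientation only, a sketch of one common way to set up the kinetic (exchange) matrix of an N-site jump process with a single rate constant k; the all-sites-exchange form and the rate convention are assumptions made for illustration, not the formulation used by the cited authors:

    import numpy as np

    def jump_matrix(n_sites, k):
        """Kinetic (exchange) matrix for an n-site jump process with a single
        rate constant k, assuming each site can jump to every other site.
        The matrix dimension equals the number of sites."""
        K = k * np.ones((n_sites, n_sites))
        np.fill_diagonal(K, -(n_sites - 1) * k)   # columns sum to zero (population conservation)
        return K

    for n in (4, 6):
        K = jump_matrix(n, k=1.0e5)   # hypothetical rate constant in s^-1
        print(n, "x", n, "matrix, column sums:", K.sum(axis=0))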

Simplifications can be brought about whenever the surface structure has symmetries. Point-group symmetries help moderately to reduce the matrix dimensions. On the other hand, two-dimensional periodicity can help drastically by reducing the number N to the number of atoms within a single two-dimensional unit cell with a depth perpendicular to the surface of a few times the electron mean free path. For surface crystallography this is, however, not yet sufficient, because surface structural determination requires repeating such calculations for hundreds of different geometrical models of the surface structure. [Pg.64]

Each of the five steps demonstrated different parallel efficiencies. Steps two and four were the most computationally intensive. The implementation by Kuppermann and co-workers obtained better than 80% efficiency for up to 64 nodes on the Caltech/JPL Mark IIIfp MIMD computer for step two.275 For large global matrix dimensions, step four was 40% efficient on 64 nodes and ~80% efficient on 8 nodes. [Pg.281]

Since the data Q(p) contain some experimental error and the kernel models q(p, H) are not exact, we can expect the results, f(Hj), to be only approximate. Indeed, it is a characteristic of deconvolution processes to be unstable with respect to small errors in the data. This problem can be somewhat mitigated by the choice of matrix dimensions. If we consider m members of the set of H and a vector p of length n, it is clear that n ≥ m must hold. If n = m, the solution vector f(Hj) is most sensitive to imperfections in the data. For n > m, the solution is stabilized because of the additional data constraints. In this work we use an overdetermined matrix for which n > 2m. ... [Pg.73]
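A toy NumPy sketch of the overdetermined setup (the kernel, grids, and noise level are invented for illustration): with n > 2m rows, the least-squares solution is constrained by more data points than unknowns:

    import numpy as np

    rng = np.random.default_rng(1)

    m = 10                                   # members of the set {H_j}
    n = 25                                   # data points, chosen so that n > 2m

    H = np.linspace(1.0, 10.0, m)            # hypothetical H grid
    p = np.linspace(0.0, 11.0, n)            # hypothetical p values

    # Hypothetical kernel model q(p, H): a Gaussian stands in for the real kernel.
    A = np.exp(-(p[:, None] - H[None, :])**2 / (2 * 0.8**2))   # n x m matrix

    f_true = rng.uniform(0.5, 1.5, m)                  # "true" distribution f(H_j)
    Q = A @ f_true + 1e-3 * rng.normal(size=n)         # data with small experimental error

    # Least-squares solution of the overdetermined system A f ~ Q; the extra
    # rows (n > m) act as additional constraints on the solution.
    f_est, *_ = np.linalg.lstsq(A, Q, rcond=None)
    print("max deviation in f(H_j):", np.max(np.abs(f_est - f_true)))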

The matrix representing the environment of the ith atom contains in the mth row the property values Pij of the atoms located at a distance equal to m from the ith atom. The first row collects the property values of the first neighbours of the considered atom. The Pij values are listed in descending order in the first entries of each row; the other entries are set to zero and are of no significance. The matrix dimension can be chosen for convenience. [Pg.37]
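A possible implementation sketch (the graph, property values, and chosen dimensions are hypothetical, not from the cited source):

    from collections import deque
    import numpy as np

    def environment_matrix(adjacency, properties, atom, max_dist, row_len):
        """Environment matrix of one atom: row m lists, in descending order, the
        property values of the atoms at topological distance m from that atom;
        the remaining entries are set to zero.  Dimensions are chosen freely."""
        # breadth-first search for topological distances from the chosen atom
        dist = {atom: 0}
        queue = deque([atom])
        while queue:
            a = queue.popleft()
            for b in adjacency[a]:
                if b not in dist:
                    dist[b] = dist[a] + 1
                    queue.append(b)

        E = np.zeros((max_dist, row_len))
        for m in range(1, max_dist + 1):
            values = sorted((properties[a] for a, d in dist.items() if d == m),
                            reverse=True)[:row_len]
            E[m - 1, :len(values)] = values
        return E

    # Hypothetical 5-atom chain 0-1-2-3-4 with arbitrary property values.
    adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
    properties = {0: 6.0, 1: 7.0, 2: 6.0, 3: 8.0, 4: 1.0}
    print(environment_matrix(adjacency, properties, atom=2, max_dist=3, row_len=3))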

Thus, one corner remains non-shared and can be oriented either up or down relative to the plane of the sheet. The possibility of different orientations induces the appearance of the orientational geometrical isomers shown in Fig. 65. By analogy with the geometrical isomers described above, the isomers under discussion can be described using the u and d symbols for tetrahedra orientation. The isomer shown in Figs. 65a and b can be described as a (u)(d) isomer (its orientation matrix has dimensions 1x2), whereas the isomer shown in Figs. 65c and d is a (ud) isomer (matrix dimensions are 2x1). The same type of isomerism is found for the actinyl sulfate sheets shown in Fig. 66. [Pg.167]

A general requirement for P-matrix analysis is n = rank(R). Unfortunately, for most practical cases, the rank of R is greater than the number of components, i.e., rank(R) > n, and rank(R) = min(m, p). Thus, P-matrix analysis is associated with the problem of substituting R with an approximation R′ that produces rank(R′) = n. This is mostly done by orthogonal decomposition methods, such as principal component analysis, partial least squares (PLS), or continuum regression [4]. Dimension requirements of the involved matrices for these methods are m > n and p > n. If the method of least squares is used, additional constraints on matrix dimensions are needed [4]. The approach of P-matrix analysis does not require quantitative concentration information for all constituents. Specifically, calibration samples with known concentrations of the analytes under investigation satisfy the calibration needs. The method of PLS will be used in this chapter for P-matrix analysis. [Pg.27]
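As an illustrative stand-in (the chapter itself uses PLS), a principal-component-regression sketch with synthetic data, showing the orthogonal decomposition that replaces R by a rank-n approximation before the regression step:

    import numpy as np

    rng = np.random.default_rng(2)

    m, p, n = 20, 50, 3          # m samples, p channels, n components: m > n and p > n
    C = rng.uniform(0.1, 1.0, (m, n))            # known analyte concentrations (calibration set)
    S = rng.uniform(0.0, 1.0, (n, p))            # hypothetical pure-component responses
    R = C @ S + 1e-3 * rng.normal(size=(m, p))   # measured responses; rank(R) > n in practice

    # Orthogonal decomposition (SVD); keep n components so the substitute has rank n.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    T = U[:, :n] * s[:n]                         # scores (m x n), a rank-n substitute for R

    # Regression of concentrations on the scores (the calibration step).
    B, *_ = np.linalg.lstsq(T, C, rcond=None)

    # Prediction for a new response vector: project onto the loadings, then apply B.
    r_new = C[0] @ S
    c_pred = (r_new @ Vt[:n].T) @ B
    print("predicted vs. true:", c_pred, C[0])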

COMMON/ONE/ X(27),XMAX(27),XMIN(27), 1DELTAX(27),DELMIN(27),MASK(27),NV, 2NTRACE,MATRIX DIMENSION ERR(27,3)... [Pg.78]

In defining the harmonic oscillator basis by Eqs. (5.23)-(5.25) the problem of scaling was postponed. Equation (5.27) shows that the three as yet undefined parameters, D, E° and a, are interrelated, so that they are all determined when any one of them has been given a value. A reasonable estimate is most easily obtained for the spacing E°, which should be close to the mean spacing of the levels considered, in order to minimize the dimensions of the Hamiltonian matrix. Thus, in the present example it was found that with E° = 80 cm⁻¹ the basis could safely be truncated at n = 39, corresponding to matrix dimensions of 20 x 20 for the diagonal blocks of H. [Pg.164]

The elements of the first square matrix (dimensions n x n) are the differentials ∂qi/∂Cj (i, j = 1, 2, ..., n) of the stationary phase concentration of one sample component with respect to the mobile phase concentration of another sample component. Since in analytical chromatography all the sample components are present only at infinite dilution, differentiation of Eqs. 13.3 gives cross-partial differentials that are zero. Only the diagonal elements of this matrix are different from 0. These nonzero elements are equal to ai/(1 + Σj bjCj), and so are simply related to... [Pg.613]
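A SymPy check of this statement, assuming the competitive Langmuir isotherm qi = ai Ci/(1 + Σj bj Cj) as the form of Eqs. 13.3 (an assumption here): two sample components 1 and 2 at infinite dilution, one additive 3 at finite concentration:

    import sympy as sp

    # Assumed competitive Langmuir isotherm: q_i = a_i*C_i / (1 + sum_j b_j*C_j)
    a1, a2, b1, b2, b3 = sp.symbols("a1 a2 b1 b2 b3", positive=True)
    C1, C2, C3 = sp.symbols("C1 C2 C3", nonnegative=True)   # 1, 2: samples; 3: additive

    denom = 1 + b1*C1 + b2*C2 + b3*C3
    q1 = a1*C1/denom
    q2 = a2*C2/denom

    # Jacobian of stationary-phase vs. mobile-phase concentrations of the sample components.
    J = sp.Matrix([[sp.diff(q1, C1), sp.diff(q1, C2)],
                   [sp.diff(q2, C1), sp.diff(q2, C2)]])

    # Infinite dilution of the sample components (C1 -> 0, C2 -> 0), additive kept finite.
    J0 = sp.simplify(J.subs({C1: 0, C2: 0}))
    print(J0)   # diagonal: a1/(1 + b3*C3), a2/(1 + b3*C3); off-diagonal elements are 0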

Exactly what kind of evidence of validation is required? How much evidence is sufficient to establish clear control? These questions can be answered through an examination of two dimensions. Validation evidence falls into six broad issue categories, further defined by two cross-matrices of risk and application. Before defining these two cross-matrix dimensions, though, a detailed description of the issue categories will be helpful. [Pg.197]

As the dimension of the blocks of the Hessian matrix increases, it becomes more efficient to solve for the wavefunction corrections using iterative methods instead of direct methods. The most useful of these methods require a series of matrix-vector products. Since a square matrix-vector product may be computed in 2N² arithmetic operations (where N is the matrix dimension), an iterative solution that requires only a few of these products is more efficient than a direct solution (which requires approximately N³ floating-point operations). The most stable of these methods expand the solution vector in a subspace of trial vectors. During each iteration of this procedure, the dimension of this subspace is increased until some measure of the error indicates that sufficient accuracy has been achieved. Such iterative methods for both linear equations and matrix eigenvalue equations have been discussed in the literature. [Pg.185]
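A generic conjugate-gradient sketch (not the subspace method described in the cited text) illustrating the cost argument: each iteration needs a single matrix-vector product of about 2N² operations, so a handful of iterations is far cheaper than an N³-operation direct solve:

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
        """Solve A x = b for symmetric positive-definite A using only
        matrix-vector products (one 2*N**2-operation product per iteration)."""
        x = np.zeros_like(b)
        r = b - A @ x
        p = r.copy()
        rs_old = r @ r
        for it in range(max_iter):
            Ap = A @ p                    # the single matrix-vector product of this iteration
            alpha = rs_old / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:     # error measure small enough: stop iterating
                return x, it + 1
            p = r + (rs_new / rs_old) * p
            rs_old = rs_new
        return x, max_iter

    rng = np.random.default_rng(3)
    N = 200
    M = rng.normal(size=(N, N))
    A = M @ M.T + N * np.eye(N)           # symmetric positive definite, well conditioned
    b = rng.normal(size=N)

    x, n_iter = conjugate_gradient(A, b)
    print(n_iter, "iterations; residual norm", np.linalg.norm(A @ x - b))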

M is a graph-theoretical matrix, n the matrix dimension, ci(Ch(M; x)) the ith coefficient of the characteristic polynomial of M, Λ(M) indicates the graph spectrum (i.e., the set of eigenvalues of M), and a and λ are real parameters. In the function, VSi(M) is the ith matrix row sum, K the total number of selected graph fragments, and the number of vertices in the kth fragment. aij indicates the elements of the adjacency matrix, which are equal to 1 for pairs of adjacent vertices and zero otherwise. [Pg.347]
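A small NumPy sketch computing these quantities for a hypothetical four-vertex path graph (an illustration, not from the cited source):

    import numpy as np

    # Adjacency matrix of a hypothetical 4-vertex path graph (a_ij = 1 for adjacent vertices).
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    n = A.shape[0]                     # matrix dimension

    coeffs = np.poly(A)                # coefficients c_i of the characteristic polynomial of A
    spectrum = np.linalg.eigvalsh(A)   # graph spectrum: the set of eigenvalues of A
    row_sums = A.sum(axis=1)           # VS_i(A): the matrix row sums (vertex degrees here)

    print("characteristic polynomial coefficients:", np.round(coeffs, 6))
    print("spectrum:", np.round(spectrum, 6))
    print("row sums VS_i:", row_sums)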









© 2024 chempedia.info