Big Chemical Encyclopedia


Sparse symmetric matrix

The edge-Szeged matrix, denoted as SZ, is a sparse symmetric matrix whose only nonzero off-diagonal elements are those corresponding to pairs of adjacent vertices (i.e.,... [Pg.793]

In the BzzMath library, the BzzMatrixSparseSymmetricLocked class can be used to collect a sparse symmetric matrix. [Pg.154]

Objects in the BzzMatrixSparseSymmetricLocked class collect the data for a sparse symmetric matrix as follows. [Pg.154]
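The BzzMath classes themselves are C++ and their internal storage scheme is not reproduced here; the following is a minimal, language-neutral sketch in Python of the general idea behind collecting a sparse symmetric matrix, namely storing each nonzero pair only once (the class name and layout are illustrative, not the BzzMath API):

```python
class SparseSymmetric:
    """Collects a sparse symmetric matrix by storing only the lower
    triangle; entries above the diagonal are mirrored on access."""

    def __init__(self, n):
        self.n = n
        self.data = {}  # (row, col) -> value, with row >= col

    def set(self, i, j, value):
        # store each symmetric pair exactly once, in the lower triangle
        if i < j:
            i, j = j, i
        self.data[(i, j)] = value

    def get(self, i, j):
        if i < j:
            i, j = j, i
        return self.data.get((i, j), 0.0)


# Example: a 3x3 symmetric matrix with one off-diagonal nonzero
m = SparseSymmetric(3)
m.set(0, 0, 2.0)
m.set(1, 0, -1.0)   # implicitly defines the (0, 1) entry as well
m.set(2, 2, 4.0)
```

Only three entries are stored, yet both `m.get(0, 1)` and `m.get(1, 0)` return the same value, which is the essential economy of a symmetric sparse collection.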

The Cluj-distance index CJD is obtained from the symmetric Cluj-distance matrix CJD as the sum of the matrix entries corresponding to all the 1st order paths (i.e., bonds) above the main diagonal, that is, by applying the Wiener operator W to the 1st order sparse symmetric Cluj-distance matrix CJD, or from the sparse unsymmetric Cluj-distance matrix CJDu by applying the Wiener orthogonal operator W'... [Pg.73]

The Cluj-detour index CJA is analogously obtained from the 1st order sparse symmetric Cluj-detour matrix CJA by applying the Wiener operator W, or from the unsymmetric Cluj-detour matrix CJAu by applying the Wiener orthogonal operator W'... [Pg.73]

The 1st order sparse symmetric Szeged matrix SZ is obtained as ... [Pg.439]

Calculating the matrix elements of the Hamiltonian in this basis set gives a sparse, real, and symmetric M(N) x M(N) matrix at order N. By systematically increasing the order N, one obtains the lowest two eigenvalues at different basis lengths M(N); for example, M(N) = 946 and 20,336 at N = 20 and 60, respectively [11]. The symmetric matrix is represented in a sparse row-wise format [140] and then reordered [141] before triangularization. A Lanczos-type block-renormalization procedure [142] was employed. [Pg.47]
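The cited calculation uses a block-renormalized Lanczos scheme on sparse row-wise storage, which is not reproduced here; as a sketch of the underlying idea, a plain Lanczos iteration (with full reorthogonalization, dense matvecs, and an illustrative test matrix) recovers the lowest eigenvalue of a sparse symmetric matrix from a small tridiagonal projection:

```python
import numpy as np

def lanczos_lowest(A, k, seed=0):
    """Plain Lanczos with full reorthogonalization (a sketch, not the
    block-renormalized variant of the text): projects A onto a k-step
    Krylov space, giving a tridiagonal matrix T whose extreme
    eigenvalues approximate those of A."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    Q = [q]
    alpha, beta = [], []
    for j in range(k):
        w = A @ Q[j]
        alpha.append(Q[j] @ w)
        # full reorthogonalization against all previous Lanczos vectors
        for v in Q:
            w = w - (v @ w) * v
        b = np.linalg.norm(w)
        if b < 1e-10:          # Krylov space exhausted
            break
        beta.append(b)
        Q.append(w / b)
    m = len(alpha)
    T = np.diag(alpha) + np.diag(beta[:m - 1], 1) + np.diag(beta[:m - 1], -1)
    return np.linalg.eigvalsh(T)[0]

# Sparse symmetric test matrix: a small 1D Laplacian (tridiagonal)
n = 12
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lowest = lanczos_lowest(A, k=n)
```

Only matrix-vector products with A are required, which is why the method pairs naturally with the sparse row-wise storage mentioned in the text.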

The Wiener matrix (Todeschini and Consonni, 2000, 2009), also called the edge-Wiener matrix (Devillers and Balaban, 1999) and denoted by W, was introduced for acyclic graphs (Randic, 1993). It is a sparse symmetric square V x V matrix whose elements are defined as... [Pg.115]

Note that in equation system (2.64) the coefficient matrix is symmetric, sparse (i.e., a significant number of its entries are zero) and banded. The symmetry of the coefficient matrix in the global finite element equations is not guaranteed for all applications (in particular, in most fluid flow problems this matrix will not be symmetric). However, the finite element method always yields sparse and banded sets of equations, a property that should be exploited to minimize computing costs in complex problems. [Pg.48]
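The cost advantage of exploiting band structure can be illustrated with the simplest banded symmetric case, a tridiagonal system, for which the Thomas algorithm needs O(n) work instead of the O(n^3) of a dense solve (a generic sketch, not the finite element code of the text; the Poisson-type test system is illustrative):

```python
def solve_sym_tridiag(d, e, b):
    """Solve A x = b where A is symmetric tridiagonal with main
    diagonal d (length n) and off-diagonal e (length n-1).
    Thomas algorithm: O(n) work by exploiting the band structure."""
    n = len(d)
    c = [0.0] * n      # modified diagonal after elimination
    g = [0.0] * n      # modified right-hand side
    c[0], g[0] = d[0], b[0]
    for i in range(1, n):
        m = e[i - 1] / c[i - 1]        # elimination multiplier
        c[i] = d[i] - m * e[i - 1]
        g[i] = b[i] - m * g[i - 1]
    x = [0.0] * n
    x[-1] = g[-1] / c[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        x[i] = (g[i] - e[i] * x[i + 1]) / c[i]
    return x

# 1D Poisson-type system: A = tridiag(-1, 2, -1), n = 4
d = [2.0, 2.0, 2.0, 2.0]
e = [-1.0, -1.0, -1.0]
x = solve_sym_tridiag(d, e, [1.0, 0.0, 0.0, 0.0])
```

For wider bands the same idea generalizes to banded Cholesky factorization, where only entries within the bandwidth are stored and updated.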

As with time propagation, the major computational task in Chebyshev propagation is repetitive matrix-vector multiplication, a task that is amenable to sparse matrix techniques with favorable scaling laws. The memory requirement is minimal because the Hamiltonian matrix need not be stored: its action on the recurring vector can be generated on the fly. Finally, the Chebyshev propagation can be performed entirely in real arithmetic as long as a real initial wave packet and a real-symmetric Hamiltonian are used. [Pg.310]
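The two ingredients above, an on-the-fly matvec and the Chebyshev three-term recurrence, can be sketched as follows (a toy 1D tight-binding Hamiltonian is assumed, with parameters chosen so its spectrum lies in [-1, 1] as the recurrence requires; this is an illustration, not the propagation scheme of the cited work):

```python
import numpy as np

def apply_H(v, eps, t):
    """Matrix-free action of a 1D tight-binding Hamiltonian:
    H[i,i] = eps, H[i,i+1] = H[i+1,i] = t.  H itself is never stored."""
    w = eps * v
    w[:-1] += t * v[1:]
    w[1:] += t * v[:-1]
    return w

def chebyshev_vectors(v0, nterms, eps, t):
    """Three-term Chebyshev recurrence
    T_{k+1}(H) v = 2 H T_k(H) v - T_{k-1}(H) v,
    assuming H is already scaled so its spectrum lies in [-1, 1].
    All arithmetic stays real for a real v0 and real-symmetric H."""
    vs = [v0, apply_H(v0, eps, t)]
    for _ in range(nterms - 2):
        vs.append(2 * apply_H(vs[-1], eps, t) - vs[-2])
    return vs

n = 8
v0 = np.zeros(n)
v0[0] = 1.0
# eps = 0, t = 0.4 keeps the spectral radius below 1 (|spectrum| <= 2|t|)
vs = chebyshev_vectors(v0, 5, 0.0, 0.4)
```

Only two vectors need be kept at any time in a production code; the full list is retained here only to make the recurrence explicit.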

The matrix elements Hkk' follow directly from (1) and correspond to directional cosines in a vector space. Transfers between adjacent sites are proportional to tpq. The local structure of VB diagrams limits the possibilities for Hkk', which can readily be enumerated [13]. Spin problems in the covalent basis have even simpler Hkk' [28]. The matrix H is not symmetric when the basis is not orthogonal, but it is extremely sparse: N sites yield about N bonds, and each transfer integral gives at most two diagrams. There are consequently about 2N off-diagonal Hkk' in matrices of order Ps(N, Ne)/4 for systems with inversion and... [Pg.649]

The expression is easily coded, since T(0), Eq. 41, and the r 1 are known. It simplifies for substitution on a principal plane or axis and for symmetrically equivalent multiple substitution, because several of the elements of the matrix T will then vanish. It is clear from Eq. 45 that the derivatives are nonvanishing only for those atoms a that have actually been substituted in the particular isotopomer s. Therefore, the Jacobian matrix X generated from these derivatives is, in general, a sparse matrix. [Pg.83]

The new coefficient matrix is effectively symmetric, since M^-1 A is similar to the symmetric matrix M^-1/2 A M^-1/2. Preconditioning aims to produce a more clustered eigenvalue structure for M^-1 A and/or a lower condition number than for A, so as to improve the relevant convergence ratio; however, preconditioning also adds to the computational effort by requiring that a linear system involving M (namely, Mz = r) be solved at every step. Thus, it is essential for efficiency of the method that M be factored very rapidly in relation to the original A. This can be achieved, for example, if M is a sparse component of the dense A. Whereas the solution of an n x n dense linear system requires order n^3 operations, the work for sparse systems can be as low as order n [13,14]. [Pg.33]
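As a concrete sketch of the trade-off described above (not taken from the text), the simplest choice of M is the diagonal of A (Jacobi preconditioning), for which Mz = r is solved by a single vector division per iteration; the test matrix below is illustrative:

```python
import numpy as np

def pcg(A, b, M_diag, tol=1e-10, maxiter=200):
    """Preconditioned conjugate gradients with a diagonal (Jacobi)
    preconditioner: solving Mz = r costs only O(n) per step."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = r / M_diag                 # solve Mz = r (M is diagonal)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / M_diag
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Symmetric positive definite test system
n = 20
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * np.ones((n, n))
b = np.ones(n)
x = pcg(A, b, np.diag(A))
```

Richer preconditioners (incomplete Cholesky, a sparse component of A as the text suggests) follow the same pattern, differing only in how the Mz = r solve is performed.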

The 1st order sparse matrix of the symmetric Cluj-distance matrix CJD is calculated by the Hadamard matrix product applied to the CJD matrix and the adjacency matrix A, as ... [Pg.73]
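The Hadamard product with the adjacency matrix simply masks out every entry that does not correspond to an edge. A small sketch (the CJD values below are made-up placeholders for illustration, not a real Cluj-distance computation):

```python
import numpy as np

# Path graph 1-2-3: adjacency matrix (symmetric, zero diagonal)
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)

# Placeholder symmetric distance-like matrix (illustrative values only)
CJD = np.array([[0.0, 2.0, 4.0],
                [2.0, 0.0, 2.0],
                [4.0, 2.0, 0.0]])

# Hadamard (elementwise) product keeps only entries for adjacent
# vertex pairs, yielding the 1st order sparse symmetric matrix
CJD1 = CJD * A
```

Summing the entries of CJD1 above the main diagonal then realizes the Wiener-operator step described for the index itself.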

For volume conductor problems, A contains all of the geometry and conductivity information of the model. The matrix A is symmetric and positive definite; thus, it is nonsingular and the system has a unique solution. Because each basis function differs from zero on only a few intervals, A is sparse (only a few of its entries are nonzero). [Pg.377]

Iterative algorithms are recommended for some linear systems Ax = b as an alternative to direct algorithms. An iteration usually amounts to one or two multiplications of the matrix A by a vector and a few linear operations with vectors. If A is sparse, a small storage space suffices; this is a major advantage of iterative methods where the direct methods have large fill-in. Furthermore, with appropriate data structures, arithmetic operations are actually performed only where both operands are nonzero; then D(A) or 2D(A) flops per iteration and D(A) + 2n units of storage space suffice, where D(A) denotes the number of nonzeros in A. Finally, iterative methods allow implicit symmetrization, where the iteration is applied to the symmetrized system A^T A x = A^T b without explicit evaluation of A^T A, which would have replaced A by the less sparse matrix A^T A. [Pg.194]
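Implicit symmetrization can be sketched with CG applied to the normal equations (the CGNR scheme): each iteration multiplies once by A and once by A^T, so the less sparse product A^T A is never formed. A minimal illustration with a small made-up nonsymmetric test matrix:

```python
import numpy as np

def cgnr(A, b, tol=1e-12, maxiter=500):
    """Conjugate gradients on the normal equations A^T A x = A^T b,
    applying A and A^T separately each iteration so that the (less
    sparse) product A^T A is never evaluated explicitly."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)          # residual of the normal equations
    p = r.copy()
    rr = r @ r
    for _ in range(maxiter):
        Ap = A @ p                 # one multiplication by A ...
        alpha = rr / (Ap @ Ap)     # since p.(A^T A)p = ||Ap||^2
        x += alpha * p
        r -= alpha * (A.T @ Ap)    # ... and one by A^T per iteration
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Nonsymmetric but nonsingular test matrix
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = cgnr(A, b)
```

For a sparse A the two products cost roughly 2D(A) flops per iteration, matching the operation count quoted in the text.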


See other pages where Sparse symmetric matrix is mentioned: [Pg.154]    [Pg.155]    [Pg.154]    [Pg.155]    [Pg.650]    [Pg.55]    [Pg.26]    [Pg.73]    [Pg.439]    [Pg.439]    [Pg.379]    [Pg.51]    [Pg.2534]    [Pg.381]    [Pg.193]    [Pg.411]    [Pg.395]    [Pg.203]    [Pg.138]    [Pg.324]    [Pg.177]    [Pg.91]    [Pg.22]    [Pg.70]    [Pg.371]    [Pg.497]    [Pg.220]    [Pg.220]    [Pg.80]    [Pg.20]    [Pg.309]    [Pg.192]    [Pg.193]   
See also in sourсe #XX -- [ Pg.163 ]



