
Vectors matrix multiplication

Large stepsizes result in a strong reduction of the number of force field evaluations per unit time (see the left-hand side of Fig. 4). This represents the major advantage of the adaptive schemes in comparison to structure-conserving methods. On the right-hand side of Fig. 4 we see the number of FFTs (i.e., matrix-vector multiplications) per unit time. As expected, we observe that the Chebyshev iteration requires about twice as many FFTs as the Krylov techniques. This is due to the fact that only about half of the eigenstates of the Hamiltonian are essentially occupied during the process. This effect is even more drastic in cases with fewer states occupied. [Pg.407]

This construction requires one matrix-vector multiplication with S and two inner products in each recursive step. Therefore, it is not necessary to store S explicitly as a matrix. The Lanczos process yields the approximation [21, 7, 12]... [Pg.430]
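As an illustration (not taken from the cited works), the sketch below shows a minimal Lanczos recursion in which the matrix S enters only through a user-supplied routine `apply_S(v)` returning S·v, so S is never stored explicitly; one matrix-vector product and two inner products are performed per step. The function and variable names are placeholders.

```python
import numpy as np

def lanczos(apply_S, v0, m):
    """Minimal Lanczos sketch: one matrix-vector product with S and two
    inner products per step; S itself is never formed explicitly."""
    n = v0.size
    Q = np.zeros((n, m))            # Lanczos vectors
    alpha = np.zeros(m)             # diagonal of the tridiagonal matrix
    beta = np.zeros(m - 1)          # off-diagonal entries
    q = v0 / np.linalg.norm(v0)
    q_prev = np.zeros(n)
    for k in range(m):
        Q[:, k] = q
        w = apply_S(q)                      # one matrix-vector multiplication
        alpha[k] = np.dot(q, w)             # first inner product
        w -= alpha[k] * q
        if k > 0:
            w -= beta[k - 1] * q_prev
        if k < m - 1:
            beta[k] = np.linalg.norm(w)     # second inner product (norm)
            q_prev, q = q, w / beta[k]
    return Q, alpha, beta

# usage on a small symmetric test matrix, applied matrix-free
S = np.diag([1.0, 2.0, 3.0, 4.0])
Q, a, b = lanczos(lambda v: S @ v, np.ones(4), 4)
print(np.linalg.eigvalsh(np.diag(a) + np.diag(b, 1) + np.diag(b, -1)))
```

The eigenvalues of the small tridiagonal matrix built from alpha and beta approximate those of S, which is the point of the approximation referred to in the excerpt.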

The rules of matrix-vector multiplication show that the matrix form is the same as the algebraic form, Eq. (5-25)... [Pg.138]
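Eq. (5-25) itself is not reproduced here, but the equivalence can be illustrated with a generic example: writing out the matrix-vector product row by row recovers exactly the componentwise algebraic equations.

```python
import numpy as np

# A @ x written out row by row is exactly the algebraic set of linear relations
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([4.0, 5.0])

matrix_form = A @ x
algebraic_form = np.array([2.0 * 4.0 + 1.0 * 5.0,    # first row written out
                           1.0 * 4.0 + 3.0 * 5.0])   # second row written out
assert np.allclose(matrix_form, algebraic_form)
```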

As indicated by the Kronecker deltas in the above equation, the resulting Hamiltonian matrix is extremely sparse and its action on a vector can readily be computed one term at a time.12,13 This property becomes very important for recursive diagonalization methods, which rely on matrix-vector multiplication ... [Pg.288]
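A minimal sketch of this idea, assuming the Hamiltonian is available as a list of nonzero terms (row, column, value); this storage scheme is illustrative and not necessarily the one used in the cited work.

```python
import numpy as np

def apply_sparse_H(terms, v):
    """Action of a sparse Hamiltonian on a vector, accumulated one nonzero
    term at a time; the full matrix is never built."""
    w = np.zeros_like(v)
    for i, j, h_ij in terms:        # only the nonzero matrix elements are visited
        w[i] += h_ij * v[j]
    return w

# toy example: tridiagonal "Hamiltonian" on 4 grid points
terms = [(0, 0, 2.0), (1, 1, 2.0), (2, 2, 2.0), (3, 3, 2.0),
         (0, 1, -1.0), (1, 0, -1.0), (1, 2, -1.0), (2, 1, -1.0),
         (2, 3, -1.0), (3, 2, -1.0)]
v = np.array([1.0, 0.0, 0.0, 0.0])
print(apply_sparse_H(terms, v))     # -> [ 2. -1.  0.  0.]
```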

PIST distinguishes itself from other spectral transform Lanczos methods by using two important innovations. First, the linear equation Eq. [38] is solved by QMR, but not to a high degree of accuracy. In practice, the QMR recursion is terminated once a prespecified (and relatively large) tolerance is reached. Consequently, the resulting Lanczos vectors are only approximately filtered. This inexact spectral transform is efficient because many fewer matrix-vector multiplications are needed, and its deficiencies can subsequently... [Pg.302]
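As a sketch of the "inexact" idea only (not the PIST code itself), the linear solve for the shifted operator can simply be stopped at a loose residual tolerance. SciPy's QMR solver is used here as a stand-in; the toy Hamiltonian, the shift E, and the tolerance value are all arbitrary choices for illustration.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import qmr

# toy tridiagonal Hamiltonian and a shifted operator (H - E*I) to be "inverted"
H = diags([[-1.0] * 99, [2.0] * 100, [-1.0] * 99], offsets=[-1, 0, 1]).tocsr()
E = 0.5                                   # filter energy (arbitrary here)
A = H - E * identity(100)

b = np.random.default_rng(0).standard_normal(100)   # current recursion vector

# loose tolerance: the solve is stopped early, giving an only approximately
# filtered vector ("inexact spectral transform"); note the keyword may be
# `tol` instead of `rtol` in older SciPy releases
x, info = qmr(A, b, rtol=1e-2)
print(info, np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```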

Like the time propagation, the major computational task in Chebyshev propagation is repetitive matrix-vector multiplication, a task that is amenable to sparse matrix techniques with favorable scaling laws. The memory requirement is minimal because the Hamiltonian matrix need not be stored and its action on the recurring vector can be generated on the fly. Finally, the Chebyshev propagation can be performed in real arithmetic as long as a real initial wave packet and a real-symmetric Hamiltonian are used. [Pg.310]
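A hedged sketch of the core three-term Chebyshev recursion follows. It assumes the Hamiltonian has already been scaled so that its spectrum lies in [-1, 1] and is available only through a matvec routine; the names and the toy matrix are illustrative, and the expansion coefficients needed for an actual propagation are omitted.

```python
import numpy as np

def chebyshev_vectors(apply_H, psi0, n_terms):
    """Three-term Chebyshev recursion T_{k+1} = 2*H*T_k - T_{k-1} applied to a
    vector.  Everything stays real for a real psi0 and real-symmetric H, and H
    enters only through its action on the recurring vector (one matvec per term)."""
    t_prev = psi0.copy()                  # T_0(H) psi0
    t_curr = apply_H(psi0)                # T_1(H) psi0
    yield t_prev
    yield t_curr
    for _ in range(2, n_terms):
        t_next = 2.0 * apply_H(t_curr) - t_prev
        t_prev, t_curr = t_curr, t_next
        yield t_curr

# toy scaled Hamiltonian with spectrum inside [-1, 1]
H = np.diag([-0.8, -0.2, 0.3, 0.9])
psi0 = np.array([0.5, 0.5, 0.5, 0.5])
vectors = list(chebyshev_vectors(lambda v: H @ v, psi0, 6))
print(len(vectors), vectors[-1])
```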

Because of round-off errors, symmetry contamination is often present even when the initial vector is properly symmetrized. To circumvent this problem, an effective scheme to reinforce the symmetry at every Lanczos recursion step has been proposed independently by Chen and Guo100 and by Wang and Carrington.195 Specifically, the Lanczos recursion is executed with symmetry-adapted vectors, but the matrix-vector multiplication is performed at every Lanczos step with the unsymmetrized vector. In other words, the symmetrized vectors are combined just before the operation Hq, and the resultant vector is symmetrized using the projection operators ... [Pg.322]
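A schematic sketch of this single-matvec symmetrization step is given below for a simple two-component reflection (parity) symmetry; the projectors, the toy Hamiltonian, and the combination rule are illustrative stand-ins, not the operators used in the cited papers.

```python
import numpy as np

def project_even(v):
    """Toy projector onto the symmetric (even) component under grid reversal."""
    return 0.5 * (v + v[::-1])

def project_odd(v):
    """Toy projector onto the antisymmetric (odd) component."""
    return 0.5 * (v - v[::-1])

def symmetry_adapted_matvec(apply_H, q_even, q_odd):
    """Combine the symmetry-adapted components just before H*q, perform ONE
    unsymmetrized matrix-vector multiplication, then re-project the result."""
    q = q_even + q_odd                    # combine right before the matvec
    w = apply_H(q)                        # single H*q on the full vector
    return project_even(w), project_odd(w)

# toy Hamiltonian that commutes with the reflection (persymmetric)
H = np.array([[ 2.0, -1.0,  0.0,  0.5],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.5,  0.0, -1.0,  2.0]])
q = np.array([1.0, 2.0, 3.0, 4.0])
w_even, w_odd = symmetry_adapted_matvec(lambda v: H @ v, project_even(q), project_odd(q))
print(w_even + w_odd)                     # equals H @ q
```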

Effectively, vector r is a 3n x 1 column vector, since each r_i in (47) is itself a three-dimensional vector. Technically speaking, in place of Ak in (46) one should write the Kronecker product Ak ⊗ I3, with I3 being the 3 x 3 identity matrix. However, to simplify notation and avoid routinely writing this obvious Kronecker product, below in this section we will be using the following convention for matrix-vector multiplications involving such vectors ... [Pg.398]
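A small sketch of this convention, showing that the explicit Kronecker product A ⊗ I3 acting on the stacked 3n-vector and the blockwise shorthand give the same result; the matrices and vectors below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))           # n x n coefficient matrix
r = rng.standard_normal((n, 3))           # n three-dimensional sub-vectors r_i

# explicit form: Kronecker product with the 3 x 3 identity acting on the
# stacked 3n-component vector
explicit = (np.kron(A, np.eye(3)) @ r.reshape(3 * n)).reshape(n, 3)

# shorthand convention: "A times r" means the i-th output sub-vector is
# sum_j A[i, j] * r_j
shorthand = A @ r

assert np.allclose(explicit, shorthand)
```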

In either case, carrying out the matrix-vector multiplication reveals the meaning of the stress vector as... [Pg.754]
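The equation itself is not reproduced in this excerpt. For context only, the usual Cauchy relation obtains the traction (stress) vector on a plane of unit normal n by exactly such a matrix-vector product; the numbers in this sketch are arbitrary and are not those of the cited source.

```python
import numpy as np

# symmetric Cauchy stress tensor (components are arbitrary example values)
sigma = np.array([[10.0,  2.0,  0.0],
                  [ 2.0,  5.0,  1.0],
                  [ 0.0,  1.0,  3.0]])
n = np.array([1.0, 0.0, 0.0])             # unit normal of the cut plane

t = sigma @ n                             # traction (stress) vector on that plane
print(t)                                  # -> [10.  2.  0.], i.e. one column of sigma
```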

Since a matrix with n columns may be considered as composed of n column vectors written side by side, as in Eq. (39), the matrix-matrix multiplication needed in Eq. (42) and later may be treated as repeated matrix-vector multiplication. The product of two n x n matrices is another n x n matrix, since each matrix-vector multiplication produces another vector. [Pg.226]
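A short sketch of this column-by-column viewpoint (illustrative only):

```python
import numpy as np

def matmul_by_columns(A, B):
    """Matrix-matrix product built column by column: each column of A @ B is
    the matrix-vector product of A with the corresponding column of B."""
    n = B.shape[1]
    return np.column_stack([A @ B[:, j] for j in range(n)])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
assert np.allclose(matmul_by_columns(A, B), A @ B)
```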

To calculate numerically the quantum dynamics of the various cations in the time-dependent domain, we shall use the multiconfiguration time-dependent Hartree (MCTDH) method [79-82, 113, 114]. This method for propagating multidimensional wave packets is one of the most powerful techniques currently available. For an overview of the capabilities and applications of the MCTDH method we refer to a recent book [114]. Additional insight into the vibronic dynamics can be achieved by performing time-independent calculations. To this end, the Lanczos algorithm [115, 116] is very well suited to our purposes because of the structural sparsity of the Hamiltonian secular matrix, which makes the matrix-vector multiplication routine very efficient to implement [6]. [Pg.249]

Figure 1.51. The augmented 4 x 4 matrix, which combines both the rotational and translational parts as indicated by thick boxes and the added row highlighted by the box drawn using dashed lines (left), and the corresponding modification of the original vector to ensure their compatibility in the matrix-vector multiplication (right).
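The figure is not reproduced here, but the construction it describes is the standard homogeneous-coordinate form: the rotation and translation are combined in a single augmented 4 x 4 matrix, and the 3-vector is extended with a fourth component equal to 1 so that the matrix-vector multiplication is well defined. A minimal sketch follows; the rotation and translation values are arbitrary.

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],    # rotation about z
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 2.0, 3.0])                           # translation

# augmented 4 x 4 matrix: rotation block, translation column, added row [0 0 0 1]
M = np.eye(4)
M[:3, :3] = R
M[:3, 3] = t

v = np.array([1.0, 0.0, 0.0])
v_aug = np.append(v, 1.0)             # vector extended with 1 for compatibility

print((M @ v_aug)[:3])                # same result as R @ v + t
```
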
Triplet topological indices were proposed based on a general matrix-vector multiplication approach; several combined descriptors are obtained as combinations of existing descriptors.

Vector-matrix-vector multiplication → graph invariants
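As a hedged illustration of how a vector-matrix-vector multiplication yields a graph invariant: the particular vectors and matrices used by the triplet indices are not specified in this excerpt, so the degree vector and adjacency matrix below are placeholders.

```python
import numpy as np

# adjacency matrix of a 4-vertex path graph 1-2-3-4
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

v = A.sum(axis=1)                 # vertex degrees as an example local vector

# vector-matrix-vector multiplication v^T A v gives a single number that is
# unchanged under any relabelling of the vertices, i.e. a graph invariant
invariant = v @ A @ v
print(invariant)
```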

Formation of the 2ph-2ph part of the product is the most difficult phase of the matrix-vector multiplication. Spin adaptation leads to the following structure ... [Pg.115]

