Big Chemical Encyclopedia


Sparse Hamiltonian matrices

As indicated by the Kronecker deltas in the above equation, the resulting Hamiltonian matrix is extremely sparse, and its action on a vector can be readily computed one term at a time.12,13 This property is very important for recursive diagonalization methods, which rely on matrix-vector multiplication ... [Pg.288]
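As an illustrative sketch of the point above (the matrix and sizes here are hypothetical, not from the text), acting with a sparse Hamiltonian on a vector costs only as much as the number of non-zero elements, which is what recursive diagonalization methods exploit:

```python
# Sketch: sparse Hamiltonian acting on a vector, cost ~ number of non-zeros.
# H here is a random sparse symmetric stand-in, not a physical Hamiltonian.
import numpy as np
from scipy.sparse import random as sparse_random

n = 200
rng = np.random.default_rng(0)
A = sparse_random(n, n, density=0.02, random_state=0, format="csr")
H = (A + A.T) * 0.5          # symmetrize; stays sparse

v = rng.standard_normal(n)
w = H @ v                    # sparse matrix-vector product
print(w.shape)
```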

As in time propagation, the major computational task in Chebyshev propagation is repetitive matrix-vector multiplication, a task that is amenable to sparse matrix techniques with favorable scaling laws. The memory requirement is minimal because the Hamiltonian matrix need not be stored and its action on the recurring vector can be generated on the fly. Finally, the Chebyshev propagation can be performed in real space as long as a real initial wave packet and a real-symmetric Hamiltonian are used. [Pg.310]
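The recursion underlying Chebyshev propagation can be sketched as follows (a minimal illustration assuming the Hamiltonian has already been scaled so its spectrum lies in [-1, 1]; the expansion coefficients are omitted). Note that the Hamiltonian enters only through matrix-vector products, only two vectors need be kept, and with a real initial vector and real-symmetric H every iterate stays real:

```python
# Three-term Chebyshev recursion: phi_{k+1} = 2 H phi_k - phi_{k-1}.
import numpy as np

def chebyshev_vectors(apply_H, psi0, order):
    """Yield T_k(H) psi0 for k = 0..order via the three-term recursion."""
    phi_prev = psi0
    yield phi_prev
    phi = apply_H(psi0)            # T_1(H) psi0
    yield phi
    for _ in range(2, order + 1):
        phi_prev, phi = phi, 2.0 * apply_H(phi) - phi_prev
        yield phi

# Toy example: diagonal "Hamiltonian" with eigenvalues inside [-1, 1].
diag = np.linspace(-0.9, 0.9, 50)
apply_H = lambda v: diag * v       # sparse action: O(N) per product
psi0 = np.ones(50) / np.sqrt(50.0)

vecs = list(chebyshev_vectors(apply_H, psi0, order=10))
print(len(vecs))   # 11 vectors: k = 0..10
```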

Therefore, the Hamiltonian matrix tends to be rather sparse, especially as the number of configurations included in the wavefunction increases. [Pg.15]

Since the Hamiltonian is both spin- and symmetry-independent, the CI expansion contains only configurations of the spin and symmetry of interest. Even taking advantage of the spin, symmetry, and sparseness of the Hamiltonian matrix, we may nonetheless be left with a matrix of a size well beyond our computational resources. [Pg.15]

In recent years the solution of problems of large amplitude motions (LAMs) has usually been based on grid representations, such as the DVR,[11, 12] of the Hamiltonians, coupled with solution by sequential diagonalization and truncation (SDT)[13, 9] of the basis or by Lanczos[2] or other iterative methods.[14] More recently, filter diagonalization (FD)[5, 4] and spectral transforms of the iterative operator[15] have also been used. There has usually been a trade-off between the use of a compact basis with a dense Hamiltonian matrix, or a simple but very large DVR with a sparse H and a fast matrix-vector product. [Pg.232]
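The Lanczos approach mentioned above can be sketched in a few lines (an illustrative, unpreconditioned version with no reorthogonalization, so it is not production quality; the toy Hamiltonian is hypothetical). H enters only through matrix-vector products, and a small tridiagonal matrix is built whose extreme eigenvalues approximate those of H:

```python
# Plain Lanczos iteration: build a tridiagonal T whose eigenvalues (Ritz
# values) approximate the extreme eigenvalues of H.
import numpy as np

def lanczos(apply_H, v0, m):
    n = v0.size
    Q = np.zeros((n, m))
    alpha, beta = np.zeros(m), np.zeros(m - 1)
    q, q_prev, b = v0 / np.linalg.norm(v0), np.zeros(n), 0.0
    for j in range(m):
        Q[:, j] = q
        w = apply_H(q) - b * q_prev
        alpha[j] = q @ w
        w -= alpha[j] * q
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev, q = q, w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return T, Q

# Toy sparse-acting Hamiltonian: diagonal plus weak nearest-neighbour coupling.
rng = np.random.default_rng(3)
n, off = 300, 0.05
diag = np.sort(rng.uniform(0.0, 10.0, n))

def apply_H(v):
    w = diag * v
    w[:-1] += off * v[1:]
    w[1:] += off * v[:-1]
    return w

T, Q = lanczos(apply_H, rng.standard_normal(n), m=40)
ritz = np.linalg.eigvalsh(T)   # lowest Ritz value approximates min eigenvalue
```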

Fig. 10.7. The block structure of the Hamiltonian matrix (H) is the result of the Slater-Condon rules (see Appendix M available at booksite.elsevier.com/978-0-444-59436-5, p. e119). S indicates single excitations, D indicates double excitations, T indicates triple excitations, and Q indicates quadruple excitations. (a) A block of zero values due to the Brillouin theorem. (b) A block of zero values due to the fourth Slater-Condon rule. (c) The non-zero block obtained according to the second and third Slater-Condon rules. (d) The non-zero block obtained according to the third Slater-Condon rule. All the non-zero blocks are sparse matrices dominated by zero values, which is important in the diagonalization process.
However, the Hamiltonian matrix Hst is too large to be stored explicitly, even when the CI wavefunction is limited to single and double excitations. Furthermore, Hst is sparse, with only a small fraction of its matrix elements non-zero. Hence the equations for CI and related methods are solved iteratively, with the sparsity of Hst explicitly taken into account in the formation of the product Σt Hst at. [Pg.25]

Although the 1D DVRs are useful, the use of direct product DVRs for multidimensional problems is highly advantageous. There are three reasons for this. First, the Hamiltonian matrix in the multi-dimensional DVR is easy to construct. Second, for a DVR in an orthonormal coordinate system, the Hamiltonian is sparse. Third, the "low... [Pg.190]

Because of the sparseness and the structure of the Hamiltonian matrix in a direct product DVR, solutions of both time-dependent and time-independent (eigenvalue) problems are made much more efficient compared with standard approaches. The two features exploited in the time-dependent problems are the sparseness of H and the fact that the kinetic energy operators couple only one dimension at a time. This latter feature is exploited in the solution of time-independent problems by sequential diagonalization and truncation, in which "adiabatic" eigenvectors in lower dimensions are recoupled (exactly, within the basis) in the higher dimensions after truncation. We turn first to the time-dependent problems. [Pg.192]
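The one-dimension-at-a-time structure of the kinetic energy in a direct-product DVR can be sketched as follows (grid sizes and matrices are hypothetical). Each 1D kinetic matrix is contracted with a single axis of the wavefunction array, and the potential acts as a pointwise (diagonal) multiplication:

```python
# H psi = sum_d (T_d acting along axis d) psi + V * psi, with V diagonal.
import numpy as np

def apply_H(psi, T_list, V):
    """Apply H = sum_d T_d + V to psi on a direct-product grid."""
    out = V * psi                                  # potential is diagonal
    for axis, T in enumerate(T_list):
        # Contract the 1D kinetic matrix with the corresponding axis only.
        out += np.moveaxis(np.tensordot(T, psi, axes=(1, axis)), 0, axis)
    return out

# Toy 3D example on an (8, 9, 10) grid with random symmetric 1D matrices.
rng = np.random.default_rng(1)
shape = (8, 9, 10)
T_list = []
for n in shape:
    A = rng.standard_normal((n, n))
    T_list.append(0.5 * (A + A.T))
V = rng.standard_normal(shape)
psi = rng.standard_normal(shape)

Hpsi = apply_H(psi, T_list, V)
print(Hpsi.shape)   # (8, 9, 10)
```

Because each term touches only one axis, the cost per matrix-vector product grows far more slowly than the square of the total grid size.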

The efficiency supplied by the Davidson method is that the main work is in the matrix-vector multiplications, which scale as M², rather than the M³ of direct diagonalization. The biggest problem is the storage of the Hamiltonian matrix, which can be written to disk and read in row by row, or in batches of matrix elements if it is sparse. Thus, we do not need to keep the Hamiltonian matrix in memory to obtain its eigenvectors. [Pg.223]
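A compact sketch of a Davidson iteration in the spirit of the passage above (tolerances, the diagonal preconditioner, and the test matrix are assumptions for illustration, not a production implementation). The only place H is used as a full matrix is in the product H @ v, which could equally be formed row by row from disk:

```python
# Davidson iteration for the lowest eigenpair of a diagonally dominant
# symmetric matrix, as CI Hamiltonians typically are.
import numpy as np

def davidson_lowest(H, tol=1e-8, max_iter=50):
    n = H.shape[0]
    V = np.zeros((n, 0))
    b = np.eye(n, 1).ravel()                       # initial guess vector
    theta, x = 0.0, b
    for _ in range(max_iter):
        b = b / np.linalg.norm(b)
        V = np.linalg.qr(np.column_stack([V, b]))[0]   # orthonormal subspace
        S = V.T @ (H @ V)                          # small projected matrix
        vals, vecs = np.linalg.eigh(S)
        theta, s = vals[0], vecs[:, 0]
        x = V @ s                                  # Ritz vector
        r = H @ x - theta * x                      # residual
        if np.linalg.norm(r) < tol:
            break
        # Diagonal (Davidson) preconditioner for the correction vector.
        b = r / (theta - np.diag(H) + 1e-12)
    return theta, x

# Toy diagonally dominant test matrix (hypothetical data).
rng = np.random.default_rng(2)
n = 60
A = rng.standard_normal((n, n)) * 0.01
H = 0.5 * (A + A.T) + np.diag(np.arange(n, dtype=float))
theta, x = davidson_lowest(H)
print("lowest eigenvalue estimate:", theta)
```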

It was not long before it was realized that even evaluating and storing the Hamiltonian placed too severe limits on the size of the CI expansion. For a million determinants, the number of Hamiltonian matrix elements is 10¹², far too large to store on disk and read in each iteration, even if the matrix were sparse and the sparsity could be exploited. The Hamiltonian is, however, composed of integrals over the molecular... [Pg.223]

Forming the matrix representation of the Hamiltonian operator and manipulating the Hamiltonian matrix to obtain the observable of interest can be computationally intensive. A discrete variable representation [53-55] (DVR) can ameliorate both of these difficulties. That is, the construction of the Hamiltonian matrix is particularly simple in a DVR because no multidimensional integrals involving the potential function are required. Also, the resulting matrix is sparse because the potential is diagonal, which expedites an iterative solution [37, 38]. In the present research we use a sinc-function based DVR (vide infra) first developed by Colbert and Miller [56] for use in the S-matrix version of the Kohn variational principle [57, 58], and used subsequently for S-matrix calculations [37, 38] in addition to N(E) calculations [23]. This is a uniform grid DVR which is constructed from an infinite set of points. It is... [Pg.43]
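The uniform-grid sinc-function DVR of Colbert and Miller referred to above can be sketched in 1D (hbar = m = 1 assumed for simplicity). The kinetic-energy matrix has a simple closed form on the grid points, and the potential matrix is diagonal, so constructing H requires no integrals at all:

```python
# Colbert-Miller sinc-DVR Hamiltonian on a uniform grid (hbar = m = 1):
# T_ii' = (-1)^(i-i') / (2 dx^2) * { pi^2/3  if i = i';  2/(i-i')^2 otherwise }
import numpy as np

def sinc_dvr_hamiltonian(x, V):
    n = len(x)
    dx = x[1] - x[0]
    i = np.arange(n)
    d = i[:, None] - i[None, :]                    # index differences i - i'
    T = np.where(d == 0, np.pi**2 / 3.0,
                 2.0 * (-1.0) ** d / np.where(d == 0, 1, d**2))
    T = T / (2.0 * dx**2)
    return T + np.diag(V(x))                       # potential is diagonal

# Check against the harmonic oscillator, V = x^2/2, exact levels n + 1/2.
x = np.linspace(-10.0, 10.0, 201)
H = sinc_dvr_hamiltonian(x, lambda x: 0.5 * x**2)
E = np.linalg.eigvalsh(H)[:3]
print(np.round(E, 6))   # close to [0.5, 1.5, 2.5]
```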

In multidimensional SDVR the Hamiltonian matrix is sparse, which greatly reduces the number of multiplications required for each application of the matrix M, where N is the size of the grid [37, 38, 56]. [Pg.53]



© 2024 chempedia.info