
Distributed memory algorithms

The four-index transformation is a good test case for parallel algorithm development in electronic structure calculations, because it requires O(N^5) operations, has a low computation-to-data-transfer ratio, and is a compact piece of code. Distributed-memory algorithms for a number of standard QC methods were presented by Whiteside and co-workers, with special emphasis on the integral transformation. Details of their implementation on a 32-processor Intel hypercube were provided. [Pg.253]
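To make the operation count concrete, the following is a minimal serial sketch in Python/NumPy (our own illustration; the function and variable names are not from the cited work) of the standard quarter-transformation scheme. Transforming all four indices in a single contraction would cost O(N^8); splitting the work into four quarter-transformations, each an O(N^5) contraction, is what gives the overall O(N^5) count, at the price of the low computation-to-data-transfer ratio noted above.

```python
import numpy as np

def four_index_transform(g_ao, C):
    """AO -> MO transformation of two-electron integrals as four successive
    quarter-transformations; each einsum below is an O(N^5) contraction."""
    g = np.einsum('pqrs,pi->iqrs', g_ao, C)  # first quarter-transformation
    g = np.einsum('iqrs,qj->ijrs', g, C)     # second
    g = np.einsum('ijrs,rk->ijks', g, C)     # third
    g = np.einsum('ijks,sl->ijkl', g, C)     # fourth
    return g

# Example: N = 10 basis functions with random data.
N = 10
g_ao = np.random.rand(N, N, N, N)
C = np.random.rand(N, N)
g_mo = four_index_transform(g_ao, C)
```

In a distributed-memory version, one of the transformed indices would typically be partitioned across the nodes, so that each node holds and transforms only a slice of the integral array.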

This completes the outline of FAMUSAMM. The algorithm has been implemented in the MD simulation program EGO VIII [48] in a sequential and a parallelized version; the latter has been implemented and tested on a number of distributed-memory parallel computers, e.g., the IBM SP2, Cray T3E, Parsytec CC, and Ethernet-linked workstation clusters running PVM or MPI. [Pg.83]

The complexity analysis shows that the load is evenly balanced among the processors, so we should expect a speedup close to P and an efficiency close to 100%. There are, however, a few extra terms in the expression for the time complexity (first-order terms in N) that arise from the need to compute the next available row in the force matrix. These row allocations can be computed ahead of time, so this overhead can be minimized; this is done in the next algorithm. Note that the communication complexity given is the worst case over all interconnection topologies, since only simple broadcast and gather operations on distributed-memory parallel systems are assumed. [Pg.488]
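As an illustration of the static row allocation just described, the sketch below (assuming mpi4py is available; the round-robin assignment, the toy force law, and all names are hypothetical, not taken from the cited algorithm) precomputes each processor's rows of the force matrix and then uses exactly the simple broadcast-and-gather pattern assumed in the worst-case communication complexity above.

```python
# A hedged sketch, not the cited algorithm: static round-robin row allocation
# for a pairwise force computation, using simple broadcast and gather.
import numpy as np
from mpi4py import MPI  # assumes an MPI installation and mpi4py

comm = MPI.COMM_WORLD
rank, P = comm.Get_rank(), comm.Get_size()

N = 1024                     # number of particles (illustrative)
my_rows = range(rank, N, P)  # rows allocated ahead of time (round-robin)

# Simple broadcast: root sends all coordinates to every processor.
x = np.random.rand(N, 3) if rank == 0 else None
x = comm.bcast(x, root=0)

# Each processor evaluates only its preassigned rows of the force matrix.
f_local = np.zeros((N, 3))
for i in my_rows:
    r = x[i] - x                      # displacements from particle i
    d2 = np.einsum('ij,ij->i', r, r)  # squared distances
    d2[i] = np.inf                    # exclude self-interaction
    f_local[i] = np.sum(r / d2[:, None] ** 1.5, axis=0)  # toy 1/r^2 force

# Simple gather: root collects and sums the partial force matrices.
f_all = comm.gather(f_local, root=0)
if rank == 0:
    forces = np.sum(f_all, axis=0)   # (N, 3) total force matrix
```

Because my_rows is fixed before the force loop starts, no run-time negotiation of the next available row is needed, which is precisely the bookkeeping overhead the text says can be minimized.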

Rendell et al. compared three previously reported algorithms for computing the fourth-order triple excitation energy component in MBPT. The authors investigated the implementation of these algorithms on current Intel distributed-memory parallel computers. The algorithms had been developed for shared-... [Pg.254]

T. R. Furlani and H. F. King, J. Comput. Chem., submitted for publication. Implementation of a Parallel Direct SCF Algorithm on a Distributed Memory Computer. [Pg.309]

H. Berryman, J. Saltz, and J. Scroggs, Concurrency Pract. Exp., 3, 159 (1991). Execution Time Support for Adaptive Scientific Algorithms on Distributed Memory Architectures. [Pg.310]

An important consideration in the parallelization of quantum chemistry algorithms for distributed-memory computers is the data distribution. The simplest approach is to replicate all the data on all the nodes. Considering, for example, a parallel direct HF computation, this means that each node must store the Fock matrix, the density matrix, the eigenvectors, and a variety of other matrices depending on the implementation. Thus, the storage requirement on each node becomes O(n^2), where n is the number of basis functions, and for the large basis sets that can be handled in a reasonable amount of time on a massively parallel computer, this storage requirement may become prohibitive. [Pg.1993]
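The scaling consequence is easy to quantify. The back-of-the-envelope sketch below is our own illustration (the matrix count and the row-block distribution are assumptions, not any specific program's scheme); it contrasts the per-node memory of fully replicated O(n^2) storage with a simple row-block distribution, which reduces the footprint to roughly O(n^2 / P).

```python
def replicated_storage_mb(n, n_matrices=4, bytes_per=8):
    """Per-node memory (MB) when every node replicates all n x n matrices
    (Fock, density, eigenvectors, ...): O(n^2) regardless of node count."""
    return n_matrices * n * n * bytes_per / 1e6

def distributed_storage_mb(n, P, n_matrices=4, bytes_per=8):
    """Per-node memory (MB) with a row-block distribution over P nodes:
    O(n^2 / P)."""
    rows_per_node = -(-n // P)  # ceil(n / P)
    return n_matrices * rows_per_node * n * bytes_per / 1e6

# Example: 5000 basis functions, four double-precision matrices, 256 nodes.
print(replicated_storage_mb(5000))        # ~800 MB per node
print(distributed_storage_mb(5000, 256))  # ~3.2 MB per node
```

The distributed variant trades the O(n^2) per-node footprint for extra communication whenever a node needs a block it does not own, which is exactly the complication the replicated-data approach avoids.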

