
Parallel implementation

Esselink K, Smit B and Hilbers P A J 1993 Efficient parallel implementation of molecular dynamics on a toroidal network. I. Parallelizing strategy J. Comput. Phys. 106 101-7... [Pg.2289]

Several groups have previously reported parallel implementations of multipole based algorithms for evaluating the electrostatic n-body problem and the related gravitational n-body problem [1, 2]. These methods permit the evaluation of the mutual interaction between n particles in serial time proportional to n log n or even n under certain conditions, with further reductions in computation time from parallel processing. [Pg.459]
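As a rough illustration of the underlying n-body problem (not the multipole algorithms of [1, 2] themselves), the sketch below evaluates the pairwise Coulomb sum directly, at O(n^2) cost, and splits the outer particle loop over worker processes; the particle data, process count, and function names are illustrative assumptions.

```python
# Hedged sketch: direct O(n^2) evaluation of pairwise Coulomb energies,
# with the outer particle loop split over worker processes.  Tree/multipole
# codes reduce the serial cost to O(n log n) or O(n); here only the parallel
# decomposition provides the speed-up.  All names and sizes are illustrative.
import numpy as np
from multiprocessing import Pool

def partial_energy(args):
    """Sum q_i*q_j/r_ij over pairs (i, j) with i in the assigned slice, j > i."""
    lo, hi, pos, q = args
    e = 0.0
    for i in range(lo, hi):
        r = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
        e += np.sum(q[i] * q[i + 1:] / r)
    return e

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, nproc = 2000, 4
    pos = rng.random((n, 3))                    # random particle coordinates
    q = rng.choice([-1.0, 1.0], size=n)         # unit charges
    bounds = np.linspace(0, n, nproc + 1).astype(int)
    tasks = [(bounds[k], bounds[k + 1], pos, q) for k in range(nproc)]
    with Pool(nproc) as pool:
        print("total Coulomb energy:", sum(pool.map(partial_energy, tasks)))
```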

The fifth and final chapter, on Parallel Force Field Evaluation, takes account of the fact that the bulk of CPU time spent in MD simulations is required for evaluation of the force field. In the first paper, Board and his coworkers present a comparison of the performance of various parallel implementations of Ewald and multipole summations together with recommendations for their application. The second paper, by Phillips et al., addresses the special problems associated with the design of parallel MD programs. Conflicting issues that shape the design of such codes are identified and the use of features such as multiple threads and message-driven execution is described. The final paper, by Okunbor and Murty, compares three force decomposition techniques (the checkerboard partitioning method... [Pg.499]

W.J. Melssen, J.R.M. Smits, G.H. Rolf and G. Kateman, Two-dimensional mapping of IR spectra using a parallel implemented self-organising feature map. Chemom. Intell. Lab. Syst., 18 (1993) 195-204. [Pg.698]

However, since the two entropy estimates asymptote to the same function, one might approximate Ŝ(U) = S(U) in (3.57), so that the acceptance probability is a constant. The procedure allows trial swaps to be accepted with 100% probability. This general parallel processing scheme, in which the macrostate range is divided into windows and configuration swaps are permitted, is not limited to density-of-states simulations or the WL algorithm in particular. Alternative partition functions can be calculated in this way, such as those from previous discussions, and the parallel implementation is also feasible for the multicanonical approach [34] and transition-matrix calculations [35]... [Pg.104]
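A minimal sketch of the windowed swap move described above, assuming the standard Metropolis-type exchange criterion built from the entropy estimates of the two windows (the function names and toy entropy are illustrative, and the formula is not necessarily identical to (3.57)):

```python
# Hedged sketch of the window-swap acceptance (standard multicanonical/WL
# replica-exchange form; not necessarily identical to eq. (3.57)).
# S_a and S_b are the entropy (log density-of-states) estimates of the two
# windows; U_a, U_b are the energies of the configurations to be exchanged.
import math

def swap_acceptance(S_a, S_b, U_a, U_b):
    """Acceptance probability for exchanging the two configurations."""
    d = (S_a(U_a) - S_a(U_b)) + (S_b(U_b) - S_b(U_a))
    return min(1.0, math.exp(d))

# If both windows have converged to the same entropy function, the exponent
# vanishes and every trial swap is accepted:
S = lambda U: -0.5 * U * U          # toy entropy estimate
print(swap_acceptance(S, S, U_a=-1.2, U_b=-0.7))   # -> 1.0
```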

Ahlrichs R, von Arnim M (1995) TURBOMOLE, parallel implementation of SCF, density functional, and chemical shift modules. In Clementi E, Corongiu G (eds) Methods and techniques in computational chemistry. STEF, Cagliari; Eichkorn K, Treutler O, Öhm H, Häser M, Ahlrichs R (1995) Chem Phys Lett 242:652; Becke AD (1988) Phys Rev A 38:3098; Perdew JP (1986) Phys Rev B 33:8822; Garrou PE (1985) Chem Rev 85:171 and references cited therein... [Pg.22]

So Hirata, Tensor contraction engine: abstraction and automated parallel implementation of configuration-interaction, coupled-cluster, and many-body perturbation theories. J. Phys. Chem. A 107, 4940 (2003). [Pg.384]

Valeev, E.F., Janssen, C.L. Second-order Møller-Plesset theory with linear R12 terms (MP2-R12) revisited: auxiliary basis set method and massively parallel implementation. J. Chem. Phys. 2004, 121, 1214-27. [Pg.147]

Dabdub, D., and J. H. Seinfeld, Numerical Advective Schemes Used in Air Quality Models—Sequential and Parallel Implementation, Atmos. Environ., 28, 3369-3385 (1994b). [Pg.934]

When selecting first-priority projects, it would be preferable to use the high-priority measures; this, however, would not exclude parallel implementation of projects pertaining to the measures of both blocks. [Pg.34]

The previous section has shown that turbulence combined with a different domain decomposition (i.e. a different number of processors for the following) is sufficient to lead to totally different instantaneous flow realizations. It is expected that a perturbation in initial conditions will have the same effect as domain decomposition. This is verified in runs TC3 and TC4, which are run on one processor only, thereby eliminating issues linked to parallel implementation. The only difference between TC3 and TC4 is that in TC4 the initial solution is identical to TC3 except at one random point, where a 10 perturbation is applied to the streamwise velocity component. Simulations with different locations of the perturbation were run to ensure that their position did not affect results. [Pg.296]
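Purely as an illustration of the TC4-style perturbation of the initial solution (the grid size, field, and perturbation amplitude below are assumptions, not the values used in runs TC3/TC4):

```python
# Hedged sketch of a TC4-style initial condition: identical to the TC3 field
# except for a perturbation of the streamwise velocity at one random point.
# Grid size and amplitude `eps` are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
u = np.ones((64, 64, 64))              # toy streamwise velocity field (TC3)
u_pert = u.copy()                      # TC4 field, identical except one point
i, j, k = rng.integers(0, 64, size=3)  # random location of the perturbation
eps = 1e-6                             # assumed perturbation amplitude
u_pert[i, j, k] *= (1.0 + eps)

print("max initial difference:", np.max(np.abs(u_pert - u)))
```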

V. E. Taylor and B. Nour-Omid. A study of the factorization fill-in for a parallel implementation of the finite element method. Int. J. Numer. Meth. Eng., 37 3809-3823, 1994. [Pg.326]

Direct ab initio methods, in which data are recomputed when required rather than being stored and retrieved, provide an alternative that seems more useful for parallel development. The simplest level of ab initio treatment (self-consistent field methods) can be readily parallelized when direct approaches are being exploited. Experience demonstrates, however, that data replication methods will not lead to truly scalable implementations, and several distributed-data schemes (described later) have been tried. These general approaches have also been used to develop scalable parallel implementations of density functional theory (DFT) methods and the simplest conventional treatment of electron correlation (second-order perturbation theory, MP2) by several groups. [Pg.245]
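As a hedged sketch of the replicated-data flavour of a direct parallel SCF step (not code from any of the packages discussed): each process recomputes the integral batches assigned to it, contracts them with a replicated density matrix, and the partial Fock matrices are combined with a global reduction. The batch loop and integral contraction below are placeholders; distributed-data schemes would instead keep each rank's block of F and D local and communicate only the pieces needed.

```python
# Hedged sketch of a replicated-data direct-SCF step (mpi4py assumed installed;
# run e.g. `mpiexec -n 4 python direct_scf_sketch.py`).  The "integral batch"
# loop is a placeholder: a real code recomputes two-electron integrals for its
# batches and contracts them with the replicated density matrix D.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nbf = 32                                        # toy number of basis functions
rng = np.random.default_rng(0)                  # same seed -> identical D everywhere
D = rng.random((nbf, nbf))
D = 0.5 * (D + D.T)                             # replicated (symmetrized) density

F_local = np.zeros((nbf, nbf))
for batch in range(rank, nbf, size):            # static round-robin work division
    F_local[batch, :] += 0.1 * D[batch, :]      # placeholder for the real contraction

F = np.empty_like(F_local)
comm.Allreduce(F_local, F, op=MPI.SUM)          # every rank obtains the full Fock matrix
if rank == 0:
    print("Fock matrix assembled, trace =", np.trace(F))
```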

Shared-memory parallel processing was certainly more successful for QC in earlier applications and continues to play a significant role in high-performance computational chemistry. A coarse-grained parallel implementation scheme for the direct SCF method by Lüthi et al. allowed for a near-asymptotic speed-up involving a very low parallelization overhead without compromising the vector performance of vector-parallel architectures. [Pg.247]

A parallel implementation of the COLUMBUS MRSDCI program system described by Schüler et al. uses a coarse-grained parallelization approach... [Pg.253]

In the drive to optimize the use of available hardware to enable more robust simulations through increasing system size and simulation length, the optimization of MD algorithms for vector hardware has been extensively discussed in the literature, with considerable attention paid to the task of optimizing efficiency. We provide some background to this effort because much of the work impacts subsequent parallel implementations. [Pg.258]

Sato, Tanaka, and Yao also reported a parallel implementation of the AMBER MD module. The target machine, the AP1000 distributed-memory parallel computer developed at Fujitsu, consisted of up to 1024 processor elements connected with three different networks. To obtain a higher degree of parallelism and better load balance between processors, a particle division method was developed to randomly allocate particles to processors. Experiments showed that a problem with 41,095 atoms can be processed 226 times faster with a 512-processor AP1000 than by a single processor. [Pg.271]
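A minimal sketch of the random particle-division idea (the atom and processor counts are taken from the text, but the code itself is an illustrative assumption, not the AP1000 implementation):

```python
# Hedged sketch of random particle division: each atom is assigned to a
# processor element at random, so spatially clustered (expensive) regions are
# spread evenly across processors.  Not the AP1000 code; names are illustrative.
import numpy as np

def random_particle_division(n_atoms, n_procs, seed=0):
    """Return owner[i] = processor element responsible for atom i."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, n_procs, size=n_atoms)

owner = random_particle_division(n_atoms=41095, n_procs=512)
counts = np.bincount(owner, minlength=512)
print("atoms per processor: min %d, max %d" % (counts.min(), counts.max()))
```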

Whereas simulation packages (CHARMM, Discover, GROMOS, and AMBER) have probably made the largest strides in exploiting the potential of parallelism, the scalability of present parallel implementations is limited to at best a few dozen processors. [Pg.276]

