Parallel version

This completes the outline of FAMUSAMM. The algorithm has been implemented in the MD simulation program EGO VIII [48] in a sequential and a parallelized version; the latter has been implemented and tested on a number of distributed-memory parallel computers, e.g., IBM SP2, Cray T3E, Parsytec CC, and Ethernet-linked workstation clusters running PVM or MPI. [Pg.83]

Parallelizing this method was not difficult, given that we already had parallel versions of several multipole algorithms to start from. The entire macroscopic assembly, given its precomputed transfer function, is handled by a single processor which has to perform k extra multipole expansions, one for each level of the macroscopic tree. Each processor is already typically performing many hundreds or thousands of such expansions, so the extra work is minimal. [Pg.462]
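
The division of labour described in the excerpt above, with local expansions on every processor and the k macroscopic levels on a single one, can be sketched roughly as follows. This is a hedged illustration, not the authors' implementation; mpi4py is used only for concreteness, and `expand`, `local_cells`, and `macro_levels` are hypothetical placeholders.

```python
"""Minimal sketch (not the authors' code): the k macroscopic-tree expansions
are assigned to one rank, while every rank keeps its ordinary local expansion
workload; the extra serial work is negligible by comparison."""
from mpi4py import MPI

def multipole_step(local_cells, macro_levels, expand, comm=MPI.COMM_WORLD):
    # `expand` is a caller-supplied callable evaluating one multipole expansion;
    # `local_cells` and `macro_levels` are hypothetical containers.
    rank = comm.Get_rank()

    # Every rank already evaluates many hundreds or thousands of expansions
    # for the cells it owns; this dominates the cost.
    local_field = sum(expand(cell) for cell in local_cells)

    # One rank additionally performs the k expansions of the macroscopic tree
    # (one per level, using the precomputed transfer function).
    macro_field = sum(expand(level) for level in macro_levels) if rank == 0 else None

    # Broadcast the macroscopic contribution so all ranks can add it in.
    macro_field = comm.bcast(macro_field, root=0)
    return local_field + macro_field
```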

Molecular dynamics simulation package with various force field implementations, special support for AMBER. Parallel version and X11 trajectory viewer available. http://ganter.chemie.uni-dortmund.de/MOSCITO/... [Pg.400]

It comes with two glass vessels suitable for volumes of 3-30 mL at operating limits of 200 °C and 15 bar. A cooling mechanism returns the reaction mixture to 30 °C after irradiation. For enhanced optimization, the PRO-6 rotor, a parallel version of the MonoPREP module with six identical vessels, can be used. [Pg.37]

Using the parallel version of our CI program, a series of test runs was carried out on an IBM SP machine consisting of RS/6000 model P2SC (120 MHz) nodes. [Pg.274]

In any case, the first step toward any receptor-based COSMO-RS calculations is the calculation of qualitatively acceptable σ-profiles of the receptor regions of enzymes. In a performance test of a highly parallel version of the TURBOMOLE program on the supercomputer at the Research Center Jülich [141], we could show that TURBOMOLE can presently handle single-point (i.e., fixed-geometry) BP-SVP DFT calculations of enzymes of up to about 1,500 atoms. On the basis of preliminary data, an enzyme of 1,000 atoms requires about 6 CPU h on 32 CPUs of a supercomputer cluster, with an at least quadratic scaling of CPU time with the number of atoms of the enzyme. Thus for medium-sized enzymes we would require a minimum of 600 h on such a supercomputer, which would be rather expensive, even if all the technical problems arising at these molecule sizes were solved. Therefore, brute-force DFT calculations appear to be unfeasible at present, but they may become possible in the future. [Pg.194]
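
Taking the quoted at-least-quadratic scaling at face value, the extrapolation behind these timings amounts to t(N) ≈ 6 h × (N/1000)². The short sketch below only restates that arithmetic; the 10,000-atom figure used for a "medium-sized" enzyme is our reading of the factor-of-100 step from 6 h to 600 h, not a number given in the excerpt.

```python
# Quadratic extrapolation of the quoted timing: t(N) = t_ref * (N / N_ref)**2.
# N = 10,000 atoms is an assumed size consistent with the 600 h figure,
# not a value stated in the excerpt.
def cpu_hours(n_atoms, t_ref=6.0, n_ref=1000):
    return t_ref * (n_atoms / n_ref) ** 2

print(cpu_hours(1500))    # ~13.5 h for the 1,500-atom test case
print(cpu_hours(10000))   # ~600 h, matching the "medium-sized enzyme" estimate
```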

Local Space Average Color (Parallel Version)... [Pg.359]

The formalisms derived above were implemented in the Ab Initio Valence Bond program TURTLE [3]. The logo for the program is shown in Fig. 1. This is the logo for the parallel version, as is obvious from the number of turtles depicted. [Pg.91]

Few of the published efforts in parallelizing QC codes appear to be directed at the most widely used packages; the exceptions include the development of parallel versions of GAMESS (at Iowa State University and at the EPSRC Daresbury Laboratory), HONDO (at IBM Kingston), and TURBOMOLE (at Karlsruhe). Though not as widely used... [Pg.244]

The development and efficient implementation of a parallel direct SCF Hartree-Fock algorithm, with gradients and random phase approximation solutions, are described by Feyereisen and Kendall, who discuss details of the structure of the parallel version of DISCO. Preliminary results for calculations using the Intel Delta parallel computer system were reported. The data showed that the algorithms were efficiently parallelized and that the throughput of a one-processor Cray X-MP was reached with about 16 nodes on the Intel Delta. The data also indicated that sequential code, which was not a bottleneck on traditional supercomputers, became time-critical on parallel computers. [Pg.250]
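
The generic replicated-data pattern that such parallel direct SCF codes follow, namely dividing integral batches over processors, accumulating partial Fock matrices, and summing them globally, can be sketched as below. This is a hedged illustration and not the DISCO code; `batches` and `integrals_for` are hypothetical placeholders, and mpi4py is used only for concreteness.

```python
"""Hedged sketch of a replicated-data parallel direct SCF Fock build:
integral batches are divided among processors, each builds a partial Fock
matrix, and a global sum yields the full matrix on every node."""
from mpi4py import MPI
import numpy as np

def parallel_fock_build(density, batches, integrals_for, comm=MPI.COMM_WORLD):
    # `batches` is a hypothetical list of integral (shell-quartet) batches;
    # `integrals_for(batch, density)` is a caller-supplied routine returning
    # that batch's contribution to the Fock matrix.
    rank, size = comm.Get_rank(), comm.Get_size()
    fock_partial = np.zeros_like(density)

    # Static round-robin distribution of integral batches over processors;
    # integrals are recomputed each iteration ("direct" SCF), never stored.
    for i, batch in enumerate(batches):
        if i % size == rank:
            fock_partial += integrals_for(batch, density)

    # Global sum so every node holds the complete Fock matrix.
    fock = np.empty_like(fock_partial)
    comm.Allreduce(fock_partial, fock, op=MPI.SUM)
    return fock
```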

Das and Saltz developed a fully distributed data-parallel version of CHARMM. The implementation used Saltz and co-workers' PARTI (Parallel Automated Runtime Toolkit at ICASE) primitives. These primitives allow the distribution of arrays across the local memories of multiple nodes and provide global addressing of these arrays even on truly distributed-memory machines. Whereas the replicated-data version of Brooks and Hodoscek... [Pg.270]
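
A toy illustration of the idea behind such distributed-array primitives (this is not the PARTI API): a translation table maps a global index to the owning node and to the local offset within that node's memory, so application code can keep addressing arrays globally on a distributed-memory machine.

```python
"""Toy block-distributed array with global addressing; real runtime systems
such as PARTI also support irregular distributions and remote communication."""
import numpy as np

class DistributedArray:
    def __init__(self, global_size, n_nodes):
        # Block distribution: global index g lives on node g // block as local
        # index g % block.
        self.block = -(-global_size // n_nodes)   # ceiling division
        self.local = [np.zeros(self.block) for _ in range(n_nodes)]

    def owner_and_offset(self, g):
        return g // self.block, g % self.block    # (owning node, local index)

    def get(self, g):
        node, off = self.owner_and_offset(g)      # would be a remote fetch
        return self.local[node][off]

    def set(self, g, value):
        node, off = self.owner_and_offset(g)      # would be a remote store
        self.local[node][off] = value

# Usage: global addressing hides the distribution from the caller.
a = DistributedArray(global_size=10, n_nodes=4)
a.set(7, 3.14)
print(a.owner_and_offset(7), a.get(7))   # (2, 1) 3.14
```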

Hayes et al. designed a shared-memory parallel version of their bending-corrected rotating linear model (BCRLM) for calculating approximate quantum scattering results. The computational algorithms and their distribution are nearly identical to what has been presented and will not be repeated.274,282... [Pg.282]

R. J. Harrison and R. A. Kendall, Theor. Chim. Acta, 79, 337 (1991). A Parallel Version of ARGOS: A Distributed Memory Model for Shared Memory UNIX Computers. [Pg.302]

C. J. Cramer, private communication, 1993. AMSOL is based on AMPAC, with the inclusion of solvation effects. A fully parallel version of this code is under development by Cray Research Inc. See also, C. J. Cramer and D. G. Truhlar, Chapter 1, this volume, Continuum Solvation Models: Classical and Quantum Mechanical Methods. [Pg.307]

Several options are available in FLOTRAN for representing fractured media. The equivalent continuum model (ECM) formulation represents the fracture and matrix continua as an equivalent single continuum. Two distinct forms of dual continuum models are also available, defined in terms of the connectivity of the matrix. These models are the dual continuum connected matrix (DCCM) and the dual continuum disconnected matrix (DCDM) options. A parallel version of the code, PFLOTRAN, has been developed based on the PETSc parallel library at Argonne National Laboratory. [Pg.2307]

Ultimately, fracture results from the breaking of atomic bonds. For a brittle solid, the balancing of the energy release rate G and the dissipative processes associated with the creation of new free surface is played out explicitly on an atom-by-atom basis if one carries out a molecular dynamics simulation of the relevant atomic-level processes. A number of calculations illustrate the level to which such simulations can be pushed using parallel versions of molecular dynamics codes. An especially beautiful sequence of snapshots from the deformation history of a solid undergoing fracture is shown in fig. 12.33. The key point illustrated by... [Pg.732]

Our simulation program was a modification of the MC-MOLDYN package (46); the alterations involved changing the integration scheme to allow for the fractional particle dynamics with a Nosé-Hoover thermostat and parallelizing it by means of a shared-memory approach. The parallel version was run on the 128-processor Cray Origin 2000 at Par-... [Pg.450]
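
For context on the thermostat mentioned above, a minimal sketch of the standard single Nosé-Hoover equations of motion with a plain explicit integration step is given below. This is a generic illustration only, not the modified MC-MOLDYN integration scheme or its fractional-particle extension; all symbols and the `forces` callable are generic placeholders.

```python
"""Generic single Nose-Hoover thermostat, integrated with a simple explicit
step purely for illustration; production codes use more careful splittings."""
import numpy as np

def nose_hoover_step(r, v, m, xi, forces, dt, Q, kT):
    # r, v: (N, 3) positions and velocities; m: (N,) masses;
    # xi: thermostat velocity; Q: thermostat mass; forces: caller-supplied routine.
    f = forces(r)
    n_dof = 3 * len(m)                          # translational degrees of freedom
    twice_ke = np.sum(m[:, None] * v * v)       # 2 x kinetic energy

    # Nose-Hoover equations of motion:
    #   dv/dt = f/m - xi * v,   dxi/dt = (2*KE - n_dof*kT) / Q
    v_new = v + dt * (f / m[:, None] - xi * v)
    xi_new = xi + dt * (twice_ke - n_dof * kT) / Q
    r_new = r + dt * v_new
    return r_new, v_new, xi_new
```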

