
Parallel algorithms

Bertsekas, D. and Castanon, D. (1991) Parallel synchronous and asynchronous implementation of the auction algorithm. Parallel Comput., 17 (6/7), 707-732. [Pg.89]

Since no communication is required by this algorithm, parallelism was simulated by sequentially running batches of integrals. This permitted data to be collected for more processes than the number of available processors in the employed cluster. [Pg.125]
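
The batching idea lends itself to a compact sketch. Below is a minimal Python illustration of simulated parallelism, in which `integral_tasks` and `run_batch` are hypothetical stand-ins for the actual integral workload: the tasks are split into one batch per simulated process, the batches run sequentially, and the maximum per-batch wall time estimates the parallel runtime.

```python
# A minimal sketch of the simulated-parallelism idea described above.
# All names here (integral_tasks, run_batch) are illustrative, not from
# the original source.
import time

def simulate_parallel_run(integral_tasks, n_processes, run_batch):
    """Run `integral_tasks` as `n_processes` sequential batches.

    Returns per-batch wall times; since the algorithm needs no
    inter-process communication, the simulated parallel time is
    simply the maximum batch time.
    """
    # Round-robin distribution of tasks over the simulated processes.
    batches = [integral_tasks[p::n_processes] for p in range(n_processes)]
    timings = []
    for batch in batches:
        start = time.perf_counter()
        run_batch(batch)              # evaluate this batch of integrals
        timings.append(time.perf_counter() - start)
    return timings

# Example: estimated parallel time for 8 simulated processes.
# times = simulate_parallel_run(tasks, 8, run_batch)
# parallel_time_estimate = max(times)
```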

Griebel, M., Knapek, S., Zumbusch, G.: Numerical simulation in molecular dynamics: numerics, algorithms, parallelization. Texts in Computational Science and Engineering, vol. [Pg.426]

Griebel M, Knapek S, Zumbusch G (2009) Numerical simulation in molecular dynamics: numerics, algorithms, parallelization, applications. Springer, Heidelberg [Pg.100]

By choosing the proper correlation algorithm, it is possible to realise sensitive filters for other types of defects (e.g. corrosion). Fig. 5.2 shows an example of the suppression of signals which do not exhibit the expected defect structure (two parallel white lines near the upper central rim portion of Fig. 5.2). The largest improvement in SNR is obtained here by using the expression (a_i · a_(i+x) / a_(i+x/2)), since for a gradiometric excitation one expects the crack response to show two maxima (a_i, a_(i+x)) with a minimum (a_(i+x/2)) in the centre (see Fig. 5.3). [Pg.262]
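
To make the filter concrete, here is a minimal sketch based on the expression as reconstructed above; the exact subscripts in the source are garbled, so the choice of the lag-x/2 central sample is an assumption, as are all names in the code.

```python
# A minimal sketch of the double-peak correlation filter described above:
# signals with two maxima separated by lag x and a central minimum are
# amplified, while single-peak responses are suppressed. The epsilon
# guard against division by zero is an illustrative addition.
import numpy as np

def double_peak_filter(a: np.ndarray, x: int, eps: float = 1e-12) -> np.ndarray:
    """Return a_i * a_(i+x) / a_(i+x/2) for each valid index i."""
    half = x // 2
    n = len(a) - x
    # The numerator is large when both expected maxima are present; the
    # denominator (central sample) is small for a genuine crack response.
    return a[:n] * a[x:x + n] / (a[half:half + n] + eps)

# Example: a synthetic crack-like response with two peaks at lag x = 8.
# signal = np.ones(64); signal[20] = signal[28] = 5.0; signal[24] = 0.2
# print(double_peak_filter(signal, 8).max())
```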

Watson S C and Carter E A 2000 Linear-scaling parallel algorithms for the first principles treatment of metals Comput. Phys. Commun. 128 67-92 [Pg.2233]

Heermann D W and Burkitt A N 1995 Parallel algorithms for statistical physics problems The Monte Carlo Method in Condensed Matter Physics vol 71 Topics in Applied Physics ed K Binder (Berlin: Springer) pp 53-74 [Pg.2290]

This completes the outline of FAMUSAMM. The algorithm has been implemented in the MD simulation program EGO VIII [48] in a sequential and a parallelized version; the latter has been implemented and tested on a number of distributed memory parallel computers, e.g., IBM SP2, Cray T3E, Parsytec CC and Ethernet-linked workstation clusters running PVM or MPI. [Pg.83]

James F. Leathrum and John A. Board. The parallel fast multipole algorithm in three dimensions. Technical report, Dept. of Electrical Engineering, Duke University, Durham, 1992. [Pg.95]

Trobec, R., Jerebic, I., Janezic, D.: Parallel Algorithm for Molecular Dynamics Integration. Parallel Computing 19 (1993) 1029-1039 [Pg.346]

Yoshida, H.: Recent Progress in the Theory and Application of Symplectic Integrators. Celestial Mechanics and Dynamical Astronomy 56 (1993) 27-43

Trobec, R., Merzel, F., Janezic, D.: On the Complexity of Parallel Symplectic Molecular Dynamics Algorithms. J. Chem. Inf. Comput. Sci. 37 (1997) 1055-1062 [Pg.347]

Several groups have previously reported parallel implementations of multipole based algorithms for evaluating the electrostatic n-body problem and the related gravitational n-body problem [1, 2]. These methods permit the evaluation of the mutual interaction between n particles in serial time proportional to n log n or even n under certain conditions, with further reductions in computation time from parallel processing. [Pg.459]
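
The n log n (or n) scaling rests on replacing the pairwise sum over a distant cluster with a truncated multipole expansion. The sketch below illustrates only the lowest-order (monopole) version of that idea; real FMM and tree codes carry higher-order multipole terms and a spatial tree, so this is an illustration of the principle, not the algorithm of the cited papers.

```python
# A minimal sketch of the idea behind multipole-accelerated n-body sums,
# shown to lowest (monopole) order: the field of a distant cluster of
# charges is approximated by a single equivalent charge at the cluster's
# centre, replacing many pairwise terms with one.
import numpy as np

def direct_potential(target, sources, charges):
    """O(n) exact Coulomb-style potential at `target` from all sources."""
    r = np.linalg.norm(sources - target, axis=1)
    return np.sum(charges / r)

def monopole_potential(target, sources, charges):
    """O(1) far-field approximation: all charge lumped at the centroid."""
    q_total = charges.sum()
    centre = np.average(sources, axis=0, weights=np.abs(charges))
    return q_total / np.linalg.norm(centre - target)

# Example: for a well-separated cluster the two results agree closely.
rng = np.random.default_rng(0)
cluster = rng.normal(0.0, 0.1, size=(1000, 3))   # tight cluster at origin
q = rng.uniform(0.5, 1.5, size=1000)             # same-sign charges
far_point = np.array([10.0, 0.0, 0.0])
print(direct_potential(far_point, cluster, q))
print(monopole_potential(far_point, cluster, q))
```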

Parallelizing this method was not difficult, given that we already had parallel versions of several multipole algorithms to start from. The entire macroscopic assembly, given its precomputed transfer function, is handled by a single processor which has to perform k extra multipole expansions, one for each level of the macroscopic tree. Each processor is already typically performing many hundreds or thousands of such expansions, so the extra work is minimal. [Pg.462]

Our multipole code D-PMTA, the Distributed Parallel Multipole Tree Algorithm, is a message passing code which runs both on workstation clusters and on tightly coupled machines such as the Cray T3D/T3E [11]. Figure 3 shows the parallel performance of D-PMTA on a moderately large simulation on the Cray T3E; the scalability is not affected by adding the macroscopic option. [Pg.462]

J. A. Board, Jr., et al., Scalable Variants of Multipole-Accelerated Algorithms for Molecular Dynamics Applications, Proceedings, Seventh SIAM Conference on Parallel Processing for Scientific Computing, SIAM, Philadelphia (1995), pp. 295-300. [Pg.470]

W. T. Rankin and J. A. Board, Jr., A Portable Distributed Implementation of the Parallel Multipole Tree Algorithm, Proceedings, Fourth IEEE International Symposium on High Performance Distributed Computing, IEEE Computer Society Press (1995), pp. 17-22. [Pg.471]

Avoiding Algorithmic Obfuscation in a Message-Driven Parallel MD Code... [Pg.472]

NAMD [7] was born of frustration with the maintainability of previous locally developed parallel molecular dynamics codes. The primary goal of being able to hand the program down to the next generation of developers is reflected in the acronym NAMD: Not (just) Another Molecular Dynamics code. Specific design requirements for NAMD were to run in parallel on the group's then recently purchased workstation cluster [8] and to use the fast multipole algorithm [9] for efficient full electrostatics evaluation as implemented in DPMTA [10]. [Pg.473]

Rankin, W., Board, J.: A portable distributed implementation of the parallel multipole tree algorithm. IEEE Symposium on High Performance Distributed Computing. Duke University Technical Report 95-002. [Pg.481]

The force decomposition algorithm maps all possible interactions to processors and does not require inter-processor communication during the force calculation phase of an MD simulation. However, obtaining the net force on each particle for the update phase does require global communication. In this section, we present parallel algorithms based on force decomposition. [Pg.486]
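
A minimal sketch may help fix the idea: the n × n matrix of pairwise interactions is tiled, each tile is computed independently (no communication), and a global sum-reduction then yields the net force per particle. The tile partition, the toy pair force, and the loop standing in for processors are all illustrative choices; a real implementation would use message passing and exploit F_ij = -F_ji.

```python
# A minimal sketch of force decomposition: each "processor" (one loop
# iteration here) owns a tile of the force matrix and computes it with
# no communication; a global reduction then forms the net forces.
import numpy as np

def pairwise_force(xi, xj):
    """Toy inverse-square repulsion between two particles."""
    d = xi - xj
    r2 = np.dot(d, d)
    return d / (r2 * np.sqrt(r2)) if r2 > 0 else np.zeros_like(d)

def force_decomposition(x, pr, pc):
    """Split the n x n force matrix into pr x pc tiles, one per processor."""
    n = len(x)
    row_blocks = np.array_split(np.arange(n), pr)
    col_blocks = np.array_split(np.arange(n), pc)
    partial = []
    for rb in row_blocks:              # each (rb, cb) tile is one processor
        for cb in col_blocks:
            f = np.zeros((n, 3))
            for i in rb:               # force phase: no communication needed
                for j in cb:
                    if i != j:
                        f[i] += pairwise_force(x[i], x[j])
            partial.append(f)
    return np.sum(partial, axis=0)     # update phase: global sum-reduction

# Example usage:
# forces = force_decomposition(np.random.rand(32, 3), pr=2, pc=2)
```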

The complexity analysis shows that the load is evenly balanced among processors, and therefore we should expect speedup close to P and efficiency close to 100%. There are, however, a few extra terms in the expression for the time complexity (first-order terms in N) that arise from the need to compute the next available row in the force matrix. These row allocations can be computed ahead of time, so this overhead can be minimized; this is done in the next algorithm. Note that the communication complexity is the worst case for all interconnection topologies, since simple broadcast and gather on distributed-memory parallel systems are assumed. [Pg.488]
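
The precomputation can be as simple as fixing each processor's row schedule once before the run, so no processor spends O(N) time per step discovering its next row. The cyclic assignment below is an illustrative choice, not necessarily the allocation used by the cited authors.

```python
# A minimal sketch of precomputing row allocations, as suggested above:
# the row schedule is computed once up front instead of at run time.
def precompute_row_allocation(n_rows, n_procs):
    """Return, for each processor, the list of force-matrix rows it owns."""
    return {p: list(range(p, n_rows, n_procs)) for p in range(n_procs)}

# Example: 10 rows over 3 processors
# -> {0: [0, 3, 6, 9], 1: [1, 4, 7], 2: [2, 5, 8]}
# schedule = precompute_row_allocation(10, 3)
```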

R. Murty and D. Okunbor, "Efficient parallel algorithms for molecular dynamics simulations", submitted to Parallel Computing. [Pg.493]

S. Plimpton, "Fast parallel algorithms for short-range molecular dynamics", J. Comput. Phys., Vol 117, no 1, 1-19, 1995. [Pg.493]

R. Trobec, I. Jerebic and D. Janezic, "Parallel algorithms for molecular dynamics integration", Parallel Computing, Vol 19, no 9, 1029-39, 1993. [Pg.494]





Algorithm using Parallel Processing

Appendix C Alternative parallel algorithm

Data distribution, parallel algorithms

Genetic algorithms parallel

HiFi from parallel algorithm to fixed-size VLSI processor array

Integral-direct algorithm parallel

Interprocessor communication, parallel algorithms

Parallel Algorithm Distributing Shell Pairs

Parallel algorithms, quantum dynamics

Parallel and Series Algorithms

Parallel clustering algorithm

Parallel control algorithm

Parallel coupled-cluster algorithm

Parallel quantum chemistry algorithms

Parallelized link cells algorithm

Performance of the Parallel Algorithms
