Big Chemical Encyclopedia

Molecular dynamics processor

Sikkenk, J.H., Indekeu, J.O., van Leeuwen, J.M.J., et al. (1988). Simulation of wetting and drying at solid-fluid interfaces on the Delft molecular dynamics processor. [Pg.183]

Parallel molecular dynamics codes are distinguished by their methods of dividing the force evaluation workload among the processors (or nodes). The force evaluation is naturally divided into bonded terms, approximating the effects of covalent bonds and involving up to four nearby atoms, and pairwise nonbonded terms, which account for the electrostatic, dispersive, and electronic repulsion interactions between atoms that are not covalently bonded. The nonbonded forces involve interactions between all pairs of particles in the system and hence require time proportional to the square of the number of atoms. Even when interactions beyond a cutoff distance are neglected, nonbonded force evaluation represents the vast majority of the work involved in a molecular dynamics simulation. [Pg.474]
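The structure of the nonbonded evaluation described above can be sketched as follows. This is a minimal illustration, not any particular package's implementation; the Lennard-Jones form and the parameter values (epsilon, sigma, cutoff) are illustrative assumptions.

```python
import numpy as np

def lj_forces(positions, epsilon=1.0, sigma=1.0, cutoff=2.5):
    """Pairwise Lennard-Jones forces; pairs beyond the cutoff are neglected."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):                    # the double loop is the O(N^2) cost
        for j in range(i + 1, n):
            rij = positions[i] - positions[j]
            r2 = float(np.dot(rij, rij))
            if r2 > cutoff * cutoff:
                continue                  # cutoff: skip distant, weak pairs
            s6 = (sigma * sigma / r2) ** 3
            # pair force from V = 4*eps*((sigma/r)^12 - (sigma/r)^6)
            f = 24.0 * epsilon * (2.0 * s6 * s6 - s6) / r2 * rij
            forces[i] += f                # Newton's third law: equal, opposite
            forces[j] -= f
    return forces
```

Two atoms placed at the potential minimum, r = 2^(1/6)·sigma, feel zero net force; the cutoff test is exactly where the "neglected outside of a cutoff" approximation enters.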

The rapid rise in computer speed over recent years has led to atom-based simulations of liquid crystals becoming an important new area of research. Molecular mechanics and Monte Carlo studies of isolated liquid crystal molecules are now routine. However, care must be taken to model properly the influence of a nematic mean field if information about molecular structure in a mesophase is required. The current state of the art consists of studies of the order of 100 molecules in the bulk, in contact with a surface, or in a bilayer in contact with a solvent. Current simulation times can extend to around 10 ns and are sufficient to observe the growth of mesophases from an isotropic liquid. The results from a number of studies look very promising, and a wealth of structural and dynamic data now exists for bulk phases, monolayers and bilayers. Continued development of force fields for liquid crystals will be particularly important in the next few years, and particular emphasis must be placed on the development of all-atom force fields that are able to reproduce liquid phase densities for small molecules. Without these it will be difficult to obtain accurate phase transition temperatures. It will also be necessary to extend atomistic models to several thousand molecules to remove major system-size effects which are present in all current work. This will be greatly facilitated by modern parallel simulation methods that allow molecular dynamics simulations to be carried out in parallel on multi-processor systems [115]. [Pg.61]

How well has Dill's prediction held up? In 2000, the first microsecond-long molecular dynamics simulation of protein folding was reported. It required 750,000 node-hours (the product of the number of processors and the number of hours) of computer time on a Cray T3 supercomputer. According to Dill's prediction, a simulation of this length was not to be expected until around 2010. However, as noted above, Dill's analysis does not take into account large-scale parallelization, which, unless the computation is communication-limited, effectively increases the speed of a computation in proportion to the number of processors available. [Pg.81]
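The node-hour bookkeeping above can be made concrete: node-hours measure total work, so under ideal (non-communication-limited) scaling, wall-clock time is simply the node-hour total divided by the processor count. The processor counts in the usage note are illustrative assumptions, not figures from the source.

```python
def wall_clock_hours(node_hours, n_processors):
    """Ideal-scaling wall time: total work spread evenly over processors."""
    return node_hours / n_processors
```

For example, 750,000 node-hours spread over 250 processors is 3,000 hours of wall-clock time (roughly four months), whereas on a single processor the same work would take on the order of 85 years, which is why large-scale parallelization changes what is feasible.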

In principle, the ideal description of a solution would be a quantum mechanical treatment of the supermolecule consisting of representative numbers of molecules of solute and solvent. In practice this is not presently feasible, even if only a single solute molecule is included. In recent years, however, with the advances in processor technology that have occurred, it has become possible to carry out increasingly detailed molecular dynamics or Monte Carlo simulations of solutions, involving hundreds or perhaps even thousands of solvent molecules. In these, all solute-solvent and solvent-solvent interactions are taken into account, at some level of sophistication. [Pg.35]

The system was equilibrated for more than 100 ps; the equilibration was checked by monitoring energy trajectories and root-mean-square (rms) deviations from the X-ray structure. Using 8 parallel processors, it took 1 hour to calculate a 2 ps simulation. The restraints on the system during the molecular dynamics calculation were released in a stepwise manner, first on side chains and then on other atoms. The rms deviation of all Cα atoms between the cytochrome b subunit in the X-ray structure and the unrestrained model after 120 ps was 1.38 Å. [Pg.125]

Finally, we draw attention to several review articles in this area. In 1986 Löwdin [23] considered various aspects of the historical development of computational QC in view of the development of both conventional supercomputers and large-scale parallel computers. More recently, Weiner [124] presented a discussion of the programming of parallel computers and their use in molecular dynamics simulations, free energy perturbation, and large-scale ab initio calculations, as well as the use of very elaborate graphical display programs in chemistry research. We also note a review on the use of parallel processors in... [Pg.245]

The implementation of molecular dynamics simulations on parallel computers needs a method that distributes over the processors both the evaluation of pair interactions and the integration of particle motions. The force terms involved in integrating the set of coupled differential equations (Newton's equations) characteristic of any MD simulation are typically nonlinear functions of the distance between pairs of atoms and may be either long-range or short-range. We use this attribute of the force terms in detailing the parallel algorithmic work conducted to date. [Pg.260]
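One simple way to distribute the pair-interaction workload is sketched below, under the assumption of a replicated-data layout (every processor holds all coordinates, evaluates a disjoint slice of the pair list, and the partial force arrays are summed). This is a generic illustration, not the source's specific scheme; the "processors" run sequentially here, and the harmonic pair force in the usage note is purely illustrative.

```python
import numpy as np

def all_pairs(n):
    """Enumerate every i < j pair once."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)]

def partition(pairs, n_procs):
    """Deal the pair list out round-robin so each processor gets a similar load."""
    return [pairs[p::n_procs] for p in range(n_procs)]

def decomposed_forces(positions, pair_force, n_procs):
    """Sum per-processor partial force arrays.

    In a real MPI code each slice would run on its own rank and the
    final sum would be a global all-reduce.
    """
    total = np.zeros_like(positions)
    for my_pairs in partition(all_pairs(len(positions)), n_procs):
        local = np.zeros_like(positions)   # this processor's partial forces
        for i, j in my_pairs:
            f = pair_force(positions[i] - positions[j])
            local[i] += f                  # Newton's third law pair
            local[j] -= f
        total += local
    return total
```

Because every pair is evaluated exactly once on exactly one processor, the summed result is independent of the processor count, which is the correctness invariant any such decomposition must preserve.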

As part of their efforts to employ parallel computers for molecular dynamics simulations, Schulten and co-workers generated a series of MD benchmarks based on their own program on a wide range of machines, including an Apple Macintosh II, a Silicon Graphics 320 VGX, a 32K-processor Connection Machine CM-200, a 60-node INMOS Transputer system, and a network of Sun workstations (using Linda). The benchmarks demonstrated that the program runs very efficiently on many platforms (e.g., at sevenfold Cray 2 processor speed on the CM-200 and at Cray 2 processor speed on the Transputer system). [Pg.272]

D. Fincham and B. J. Ralston, Comput. Phys. Commun., 23, 127 (1981). Molecular Dynamics Simulation Using the Cray-1 Vector Processor. [Pg.310]

F. Mueller-Plathe, Comput. Phys. Commun., 61, 285 (1990). Parallelising a Molecular Dynamics Algorithm on a Multi-Processor Workstation. [Pg.311]

A technique that is increasingly popular is molecular dynamics. This enables the study of free energies and of the effects of changing temperature and pressure. The technique is notoriously hungry for computer resources, but increases in storage capacity, memory, and processor speed have made it more feasible, and it is now possible to combine ab initio and molecular dynamics calculations. The next section is devoted to this and related topics. [Pg.119]

Rapaport, D.C. Enhanced molecular dynamics performance with a programmable graphics processor. arXiv:0911.5631v1, 2009. [Pg.19]

Davis, J., Ozsoy, A., Patel, S., Taufer, M. Towards Large-Scale Molecular Dynamics Simulations on Graphics Processors, Springer, Berlin/Heidelberg, 2009. [Pg.19]

Molecular dynamics is a true first principles dynamic molecular model. It simply solves the equations of motion. Given an intermolecular potential, MD provides the exact spatial and temporal evolution of the system. The stiffness caused by fast vibrations compared with slow molecular relaxations demands relatively small time steps and challenges current simulations. As an example, the time scale associated with vibrations is a fraction of a picosecond, whereas those associated with diffusion or reaction may easily be in the seconds to hours range depending on the activation energy. Consequently, MD on a single processor is usually limited to short time and length scales (e.g., pico- to nanoseconds and 1-2 nm). [Pg.1717]
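The "solve the equations of motion" step above is typically carried out with a time-reversible integrator such as velocity Verlet. The sketch below is the generic textbook form, not tied to any particular MD code; the harmonic test force and step counts in the usage note are illustrative assumptions. The step size must resolve the fastest vibration present, which is why the sub-picosecond vibrational time scale forces femtosecond-scale steps.

```python
def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Advance (x, v) by n_steps of the velocity-Verlet integrator."""
    f = force(x)
    for _ in range(n_steps):
        v = v + 0.5 * dt * f / mass   # half-step velocity update ("kick")
        x = x + dt * v                # full-step position update ("drift")
        f = force(x)                  # recompute forces at the new positions
        v = v + 0.5 * dt * f / mass   # second half-step velocity update
    return x, v
```

For a unit-mass harmonic oscillator (force = -x) started at x = 1, v = 0, the total energy 0.5·v² + 0.5·x² stays close to 0.5 over thousands of steps; this bounded long-time energy error is what makes the scheme standard in MD.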

The most accurate method of calculating the dynamical behaviour of surfactants is to integrate the equations of motion of all of the atoms in the system. It is obvious that the molecular dynamics calculations described in this chapter give only a rough estimate of the real situation. Such MD techniques require computer processor speeds and memory capacities that currently limit their applicability to a few nanoseconds of molecular motion. This is inadequate for many chemical processes of surfactants which occur on the microsecond (or longer) time-scales. Effects which are dependent on molecular diffusion cannot be investigated due to the... [Pg.547]

Among the many parallel implementations of molecular dynamics code [8,24,30,35], two approaches to redistributing particles between processors are employed (see Figure 26.14). [Pg.744]

In principle, atomistic studies with good-quality force fields should be sufficient to represent liquid crystal phases or polymer melts to a high level of accuracy, and most material properties (order parameters, densities, viscosities, elastic constants, etc.) should be available from such simulations. In practice, this is rarely (if ever) the case. For example, using molecular dynamics, the computational cost of atomistic simulations is such that it is rarely possible to simulate for longer than a few tens of nanoseconds for (say) 10,000 atoms. Even these modest times often require several months of CPU time on today's fastest processors. [Pg.59]

This article reviews some of the progress made in using parallel processor systems to study macromolecules. After an initial introduction to the key concepts required to understand parallelisation, the main part of the article focuses on molecular dynamics. It is shown that simple replicated data methods can be used to carry out molecular dynamics effectively, without the need for major changes from the approach used in scalar codes. Domain decomposition methods are then introduced as a path toward reducing inter-processor communication costs further to produce truly scalable simulation algorithms. Finally, some of the methods available for carrying out parallel Monte Carlo simulations are discussed. [Pg.336]
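The domain-decomposition idea mentioned above can be illustrated by the ownership map alone: space is split into per-processor cells, each processor owns the atoms inside its cell, and force communication is needed only between neighbouring cells rather than between all processors. The sketch below is a hedged illustration, showing only the atom-to-processor assignment; the box size and 2x2x2 grid are assumptions, and the halo (boundary) exchange itself is omitted.

```python
import numpy as np

def assign_domains(positions, box_length, n_per_axis):
    """Map each atom to the rank of the processor owning its spatial cell."""
    cell_size = box_length / n_per_axis
    cell = np.floor(positions / cell_size).astype(int)
    cell = np.clip(cell, 0, n_per_axis - 1)   # keep boundary atoms in the box
    # flatten the (ix, iy, iz) cell index into a single processor rank
    return cell[:, 0] * n_per_axis ** 2 + cell[:, 1] * n_per_axis + cell[:, 2]
```

Because an atom interacts (within a short-range cutoff) only with atoms in its own and adjacent cells, each rank communicates with a fixed number of neighbours regardless of the total processor count, which is what makes the approach scalable.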

