
Parallelized computer codes

While TD-DFT continuum calculations for molecules such as camphor are not yet quite practicable, efforts to create highly parallel computer codes capable of tackling problems of this scale are expected to be fruitful soon. In the meantime, TD-DFT studies of computationally less demanding small molecules [66-68] or highly symmetric molecules, such as SF6 [79], have provided indications of the general value of including electron response effects. [Pg.299]

Molecular dynamics simulations are capable of addressing the self-assembly process at a rudimentary, but often impressive, level. These calculations can be used to study the secondary structure (and some tertiary structure) of large complex molecules. Present computers and codes can handle massive calculations but cannot eliminate concerns that boundary conditions may affect the result. Eventually, continued improvements in computer hardware will provide this added capacity in serial computers; the development of parallel computer codes is likely to accomplish the goal more quickly. In addition, the development of realistic, time-efficient potentials will accelerate the useful application of dynamic simulation to the self-assembly process. In addition, principles are needed to guide the selec-... [Pg.143]
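Since the excerpt above turns on how boundary conditions are imposed, the following minimal Python sketch illustrates the minimum-image convention commonly used to apply periodic boundary conditions in a pairwise force loop. It is not any production MD code: the Lennard-Jones potential, the box size, and the toy lattice are all illustrative assumptions.

```python
import numpy as np

def minimum_image(rij, box):
    """Wrap a displacement vector onto the nearest periodic image."""
    return rij - box * np.round(rij / box)

def lj_forces(positions, box, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces under periodic boundary conditions.

    A toy O(N^2) double loop; production MD codes use neighbor lists
    and domain decomposition, which is the step that parallelizes well.
    """
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(i + 1, n):
            rij = minimum_image(positions[i] - positions[j], box)
            r2 = np.dot(rij, rij)
            sr6 = (sigma**2 / r2) ** 3
            # -dV/dr expressed as a scalar factor multiplying rij
            f = 24.0 * epsilon * (2.0 * sr6**2 - sr6) / r2
            forces[i] += f * rij
            forces[j] -= f * rij
    return forces

# Example: 27 particles on a cubic lattice in a periodic box (hypothetical)
box = 6.0
grid = np.linspace(0.5, box - 0.5, 3)
positions = np.array([[x, y, z] for x in grid for y in grid for z in grid])
print(lj_forces(positions, box).shape)  # (27, 3)
```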

We now consider the status of parallelized computer codes and algorithms for computation in quantum chemistry, molecular dynamics, and reaction dynamics. Our focus is on the migration to parallel hardware of the major production codes commonly used, both on workstations and on conventional supercomputers, within the chemistry community. [Pg.240]

There is an ample spectrum of physics-based simulations available in the literature. Most of them have been published over the last two decades, after supercomputing centers open for public research became available in the mid- to late 1990s and early 2000s, which boosted the capacity of modelers to conduct regional-scale simulations at resolutions not possible before parallel computer codes were developed and tested. Some of these simulations have already been cited to illustrate the various aspects involved in physics-based simulation. They cannot all possibly be covered here, so only a small selection is addressed next. The selection includes examples of scenario and real earthquake simulations that have been used in verification and validation studies; those are the simulations of the Great Southern California ShakeOut and the 2008 Chino Hills, California, earthquake. Also covered here, to a lesser extent, is the application of physics-based simulation as a tool to construct a physics-based framework for probabilistic seismic hazard analysis, as is done in the CyberShake project of the Southern California Earthquake Center. [Pg.1918]

These codes have stressed the current supercomputer, whether it was the CDC 6600 in the 1970s, the Crays in the 1980s, or the massively parallel computers of the 1990s. Multimillion-cell calculations are routinely performed at Sandia National Laboratories with the CTH [1], [2] code, yet... [Pg.324]

Computer codes. Because of the computer's ability to handle the complicated mathematics, most of the compounded and feedback effects are built into computer codes for analyzing dynamic instabilities. Most of these codes can analyze one or more of the following instabilities: density wave instability; compound dynamic instabilities, such as BWR instability and parallel-channel instability; and pressure drop oscillations. [Pg.506]

The computational demands for Hollander's simulations (in 2000) were typically on the order of 1 CPU-week on a Pentium III/500 MHz with 400 MB of memory per run. The computer code was fully parallelized and was run on a 12-CPU Beowulf cluster. [Pg.202]

Exciting new developments, not discussed in the review, are the extension of time-dependent wavepacket reactive scattering theory to full-dimensional four-atom systems [137,199-201], the adaptation of the codes to use the power of parallel computers [202], and the development of new computational techniques for acting with the Hamiltonian operator on the wavepacket [138]. [Pg.284]
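The excerpt mentions new techniques for acting with the Hamiltonian operator on the wavepacket; one widely used scheme is split-operator FFT propagation, sketched below in Python for a one-dimensional Gaussian wavepacket in a harmonic well. The grid, time step, and potential are illustrative assumptions (atomic units), not taken from the cited work.

```python
import numpy as np

# Split-operator propagation: exp(-i*H*dt) is approximated by
# exp(-i*V*dt/2) * exp(-i*T*dt) * exp(-i*V*dt/2), with the kinetic
# step applied in momentum space via FFT. Atomic units (hbar = m = 1).
n, L, dt, steps = 256, 20.0, 0.01, 500
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)

V = 0.5 * x**2                                 # harmonic well
psi = np.exp(-(x + 2.0) ** 2).astype(complex)  # displaced Gaussian
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize on the grid

expV = np.exp(-0.5j * V * dt)    # half-step in the potential
expT = np.exp(-0.5j * k**2 * dt) # full kinetic step in k-space (T = k^2/2)

for _ in range(steps):
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dx)  # stays ~1
```

The FFT step is also where such codes spend most of their time, which is why adapting them to parallel computers pays off.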

Instead of enhancing the performance of computers, many theoreticians have tried to enhance the efficiency of computation by improving computational codes. One approach is to reduce the dependence of the computation at the present step on the computational results of previous steps along the system trajectory, or to increase the parallelism of the computation. The other effective approach is the use of the concept of hierarchical coupling of paradigms... [Pg.311]
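As a toy illustration of the second strategy, increasing the parallelism of the computation, the sketch below evaluates an ensemble of mutually independent trajectories concurrently; within each trajectory the steps remain sequential, but across trajectories there is no step-to-step dependence to break. The Ornstein-Uhlenbeck-like dynamics and the worker count are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool

def run_trajectory(seed, steps=10_000, dt=0.01):
    """Toy overdamped trajectory (Ornstein-Uhlenbeck-like, illustrative).
    Steps within one trajectory are sequential, but separate trajectories
    carry no mutual dependence, so the ensemble maps cleanly onto
    parallel workers."""
    rng = np.random.default_rng(seed)
    x = 0.0
    for _ in range(steps):
        x += -x * dt + np.sqrt(dt) * rng.normal()
    return x

if __name__ == "__main__":
    with Pool(processes=4) as pool:            # worker count: illustrative
        finals = pool.map(run_trajectory, range(16))
    print("ensemble mean:", np.mean(finals))
```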

The characteristics of Cholesky's procedure allow one to construct computational codes working in parallel form [52a,b], testing the effect of several functions on the linear coefficients concurrently. [Pg.185]
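A minimal sketch of the pattern the excerpt alludes to: once a symmetric positive-definite matrix has been Cholesky-factorized, the linear coefficients for several trial functions can be obtained as independent solves against the one factorization, and that independence is what makes the step easy to run concurrently. The synthetic matrix and the SciPy routines are assumptions for illustration; this is not the code of Ref. [52a,b].

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(0)
n, m = 50, 8                      # matrix size and number of trial functions
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)       # synthetic symmetric positive-definite matrix
B = rng.standard_normal((n, m))   # one right-hand side per trial function

c, low = cho_factor(S)            # O(n^3) factorization, done once
coeffs = cho_solve((c, low), B)   # m independent triangular solves:
                                  # each column could go to its own worker

print(np.allclose(S @ coeffs, B))  # True: all coefficient sets recovered
```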

In designing a complex parallel program, it is useful to think of any parallel computer in terms of a NUMA model in which all memory, anywhere in the system, can be accessed by any processor as needed. One can then focus independently on the questions of memory access methods (which determine coding style for an application) and memory access costs (which ultimately determine program efficiency). [Pg.213]
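NUMA costs proper require multi-socket hardware to observe, but the separation the excerpt draws, access methods versus access costs, can be illustrated on any machine: the sketch below performs the same reduction over the same data with two traversal patterns, so only the memory-access pattern (and hence the cost) differs. The array size is an arbitrary assumption.

```python
import time
import numpy as np

# A stand-in for NUMA access-cost effects: identical arithmetic over
# identical bytes, but the traversal order changes how the memory
# system is exercised.
n = 2000
a = np.random.rand(n, n)          # C (row-major) layout

t0 = time.perf_counter()
total = sum(a[i, :].sum() for i in range(n))   # contiguous row slices
t1 = time.perf_counter()
total += sum(a[:, j].sum() for j in range(n))  # strided column slices
t2 = time.perf_counter()

print(f"rows: {t1 - t0:.4f} s   columns: {t2 - t1:.4f} s")
```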

In prior sections of this chapter, we discussed a variety of programming languages, as well as program structuring techniques that have been found useful for writing parallel computational chemistry codes. Some parallel systems are described in the chapter appendix. In this section, we delve more deeply into the more specific topics of ... [Pg.231]

Before embarking on a detailed account of the work to date in parallelizing QC codes at both the Hartree-Fock and post-Hartree-Fock levels, we draw attention to work that describes the provision of general parallel computational chemistry capabilities. [Pg.249]

The development and efficient implementation of a parallel direct SCF Hartree-Fock algorithm, with gradients and random phase approximation solutions, are described by Feyereisen and Kendall, who discussed details of the structure of the parallel version of DISCO. Preliminary results for calculations using the Intel Delta parallel computer system were reported. The data showed that the algorithms were efficiently parallelized and that the throughput of a one-processor Cray X-MP was reached with about 16 nodes on the Intel Delta. The data also indicated that sequential code, which was not a bottleneck on traditional supercomputers, became time-critical on parallel computers. [Pg.250]
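The following Python sketch shows the generic replicated-data pattern behind many parallel direct SCF implementations, not DISCO's actual algorithm: integral batches are recomputed on the fly, dealt out to workers, and the partial Fock-matrix contributions are summed in a final reduction. The batch count, basis size, and random "integrals" are placeholders.

```python
import numpy as np
from multiprocessing import Pool

N_BASIS, N_BATCHES = 10, 64   # placeholder sizes

def batch_contribution(batch_id):
    """Stand-in for computing one batch of two-electron integrals on the
    fly and contracting it with the density matrix into a partial
    Fock matrix."""
    rng = np.random.default_rng(batch_id)    # deterministic per batch
    g = rng.standard_normal((N_BASIS, N_BASIS))
    return 0.5 * (g + g.T) / N_BATCHES       # symmetric toy contribution

if __name__ == "__main__":
    with Pool(processes=4) as pool:          # batches dealt out to workers
        partials = pool.map(batch_contribution, range(N_BATCHES))
    fock = sum(partials)                     # global reduction
    print("Fock-like matrix shape:", fock.shape)
```

Note how everything outside `batch_contribution` runs serially; as the excerpt observes, it is exactly such residual sequential code that becomes time-critical as the worker count grows.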

