
Workstation clusters

This completes the outline of FAMUSAMM. The algorithm has been implemented in the MD simulation program EGO VIII [48] in a sequential and a parallelized version; the latter has been implemented and tested on a number of distributed-memory parallel computers, e.g., the IBM SP2, Cray T3E, Parsytec CC, and Ethernet-linked workstation clusters running PVM or MPI. [Pg.83]

Our multipole code D-PMTA, the Distributed Parallel Multipole Tree Algorithm, is a message-passing code which runs both on workstation clusters and on tightly coupled machines such as the Cray T3D/T3E [11]. Figure 3 shows the parallel performance of D-PMTA on a moderately large simulation on the Cray T3E; the scalability is not affected by adding the macroscopic option. [Pg.462]

The Fourier sum, involving the three-dimensional FFT, does not currently run efficiently on more than perhaps eight processors in a network-of-workstations environment. On a more tightly coupled machine such as the Cray T3D/T3E, we obtain reasonable efficiency on 16 processors, as shown in Fig. 5. Our initial production implementation was targeted for a small workstation cluster, so we only parallelized the real-space part, relegating the Fourier component to serial evaluation on the master processor. By Amdahl's principle, the 16% of the work attributable to the serially computed Fourier sum limits our potential speedup on 8 processors to 6.25, a number we are able to approach quite closely. [Pg.465]
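To make the arithmetic explicit, here is our reconstruction of the bound from the figures quoted above, with s denoting the serial fraction and p the number of processors:

```latex
% Amdahl bound: serial fraction s = 0.16 (the Fourier sum),
% p processors sharing the remaining real-space work.
S(p) = \frac{1}{s + (1-s)/p},
\qquad
\lim_{p \to \infty} S(p) = \frac{1}{s} = \frac{1}{0.16} = 6.25
```

Strictly, this formula gives only 1/(0.16 + 0.84/8) ≈ 3.8 on 8 processors; the quoted ceiling of 6.25 is approachable there only if the serial Fourier sum on the master overlaps the parallel real-space work, so that the runtime is bounded by max(s, (1-s)/p), which we take to be the situation described.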

NAMD [7] was born of frustration with the maintainability of previous locally developed parallel molecular dynamics codes. The primary goal of being able to hand the program down to the next generation of developers is reflected in the acronym NAMD: Not (just) Another Molecular Dynamics code. Specific design requirements for NAMD were to run in parallel on the group's then recently purchased workstation cluster [8] and to use the fast multipole algorithm [9] for efficient full electrostatics evaluation, as implemented in DPMTA [10]. [Pg.473]

The docking calculations are usually iterated separately for each molecule of a database. Since the calculation times per molecule are still quite large even for fast algorithms (about a minute per ligand), the use of a workstation cluster or parallel hardware is of great advantage. Within FlexX, a task... [Pg.36]
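Since each ligand is scored independently, this parallelizes as a simple task farm. Below is a minimal sketch in C with MPI, under stated assumptions: score_ligand() and NUM_LIGANDS are hypothetical stand-ins for the real docking call and database size, and this is not FlexX code.

```c
/* Minimal MPI task farm for per-ligand docking: ranks stride through
 * the database, then the best score is reduced to rank 0.
 * score_ligand() is a placeholder for the real docking engine. */
#include <mpi.h>
#include <stdio.h>
#include <float.h>

#define NUM_LIGANDS 1000              /* assumed database size */

static double score_ligand(int id)
{
    return (double)(id % 97) * 0.01;  /* dummy score, not real docking */
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double best = -DBL_MAX;
    for (int i = rank; i < NUM_LIGANDS; i += size) {  /* static cyclic split */
        double s = score_ligand(i);
        if (s > best)
            best = s;
    }

    double global_best;               /* best score over all ranks */
    MPI_Reduce(&best, &global_best, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("best docking score: %f\n", global_best);

    MPI_Finalize();
    return 0;
}
```

With roughly a minute per ligand and run times that vary from molecule to molecule, a dynamic master-worker scheme (workers requesting the next ligand id on demand) balances load better than this static split; the static version is shown because it fits in a few lines.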

Most MPP vendors will provide HPF (or at least the subset), including extrinsic procedures, on most message-passing machines. However, HPF is still not widely available, and portability will remain an issue because many facets, such as I/O, are not defined. Applied Parallel Research has a product called xHPF that provides a partial HPF subset with portability between true MPP machines and workstation clusters (using PVM, Express, or Linda). The performance and robustness of this package are not known to us. [Pg.227]

Full MPI is big, with about 50 defined constants and about 120 routines in the FORTRAN interface. To encourage rapid implementation, a subset has been defined that includes most of the point-to-point routines, simple global operations, and restrictions on the nature of communicators. Several vendors promise to have early implementations, and a portable version is already available from Argonne National Laboratory. The portable version will be built upon P4 or PVM on workstation clusters. [Pg.228]
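As an illustration of what fits in such a subset, here is a complete C program using only point-to-point calls, one simple global operation, and the predefined communicator (our example, not part of the subset definition itself):

```c
/* Uses only core-subset MPI: MPI_Send/MPI_Recv (point-to-point)
 * and MPI_Allreduce (a simple global operation) on MPI_COMM_WORLD. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, token = 0, sum;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size > 1) {                   /* pass a token around a ring */
        if (rank == 0) {
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD, &status);
        } else {
            MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
        }
    }

    /* global reduction: sum of all ranks, result on every process */
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    if (rank == 0)
        printf("token = %d, sum of ranks = %d\n", token, sum);

    MPI_Finalize();
    return 0;
}
```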

Examples of using direct SCF on workstation clusters and the general exploitation of networked supercomputing resources are provided by Feyereisen et al., Brode et al., and Lüthi and Almlöf. Feyereisen and coworkers used a cluster of workstations as an alternative supercomputing resource, implementing the direct SCF and RPA code DISCO with different... [Pg.252]

Parallel Direct SCF and Gradient Program for Workstation Clusters. [Pg.306]

HITERM integrates high-performance computing on parallel machines and workstation clusters with a decision-support approach based on a hybrid expert system. Typical applications are in the domain of technological risk assessment and... [Pg.265]

Parallel Evolutionary Algorithms for Optimizing the UNIFAC Matrix on Workstation Clusters

In the past, the optimization program has been run not only on IBM machines but also on an Intel Paragon in Jülich and on KSR-1 computers at Göttingen, Hannover, and Braunschweig, and especially on workstation clusters at the Technical University of Braunschweig. To this end, adapted communication subroutines are incorporated in the code, depending on the computing platform used. [Pg.18]
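A common way to arrange such adapted communication subroutines is a thin wrapper that the optimizer calls, with a compile-time switch selecting whichever library the platform provides. The sketch below is hypothetical (it is not the code of the program described); only the PVM and MPI calls themselves are the standard ones.

```c
/* comm.c: hypothetical portability layer. The optimization code calls
 * comm_send()/comm_recv(); compiling with -DUSE_MPI selects MPI,
 * otherwise PVM 3 is used. 'peer' is an MPI rank or a PVM task id. */
#ifdef USE_MPI
#include <mpi.h>

void comm_send(double *buf, int n, int peer, int tag)
{
    MPI_Send(buf, n, MPI_DOUBLE, peer, tag, MPI_COMM_WORLD);
}

void comm_recv(double *buf, int n, int peer, int tag)
{
    MPI_Status status;
    MPI_Recv(buf, n, MPI_DOUBLE, peer, tag, MPI_COMM_WORLD, &status);
}

#else /* PVM 3 */
#include <pvm3.h>

void comm_send(double *buf, int n, int peer, int tag)
{
    pvm_initsend(PvmDataDefault);     /* new send buffer, default encoding */
    pvm_pkdouble(buf, n, 1);          /* pack n doubles, stride 1 */
    pvm_send(peer, tag);
}

void comm_recv(double *buf, int n, int peer, int tag)
{
    pvm_recv(peer, tag);              /* blocking receive */
    pvm_upkdouble(buf, n, 1);         /* unpack into caller's buffer */
}
#endif
```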

Parallel determination of the quality values for all parameter sets by the respective codes on the workstation cluster... [Pg.19]

Adaptive evolutionary algorithms have proved to be a robust and extraordinarily effective optimization method for tasks in the energy and process technology areas. Their simple parallel algorithmic structures, which require only minor communication resources, allow workstation clusters to be used efficiently with the PVM communications software. This means that complex parameter optimizations of multivariable functions can be performed at reasonable processing speeds. The results recorded to date are promising, and the development of a code suitable for industrial applications has been completed. [Pg.20]
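As a concrete illustration of that structure, here is a minimal sketch in C of one generation, with the "parallel determination of the quality values" quoted above expressed as a scatter/evaluate/gather cycle. Everything here is hypothetical: quality() stands in for the real simulation codes, the population size is tied to the process count for brevity, and MPI is used instead of PVM purely for compactness.

```c
/* Hypothetical sketch of a parallel EA generation: the master scatters
 * parameter sets, every process evaluates one individual's quality in
 * parallel, and the master gathers the quality values for selection. */
#include <mpi.h>
#include <stdlib.h>

#define NPARAM 8                          /* parameters per individual (assumed) */

static double quality(const double *p)    /* placeholder objective function */
{
    double q = 0.0;
    for (int i = 0; i < NPARAM; ++i)
        q -= p[i] * p[i];
    return q;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double indiv[NPARAM], *pop = NULL, *quals = NULL;
    if (rank == 0) {                      /* master owns population and qualities */
        pop   = malloc((size_t)size * NPARAM * sizeof *pop);
        quals = malloc((size_t)size * sizeof *quals);
        for (int i = 0; i < size * NPARAM; ++i)
            pop[i] = (double)rand() / RAND_MAX - 0.5;  /* random initial population */
    }

    for (int gen = 0; gen < 100; ++gen) {
        MPI_Scatter(pop, NPARAM, MPI_DOUBLE,
                    indiv, NPARAM, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        double q = quality(indiv);        /* parallel quality determination */
        MPI_Gather(&q, 1, MPI_DOUBLE, quals, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
        if (rank == 0) {
            /* selection, recombination, and mutation of pop[] would go here */
        }
    }

    if (rank == 0) { free(pop); free(quals); }
    MPI_Finalize();
    return 0;
}
```

Note how little communication the loop needs per generation (one scatter, one gather), which is why such algorithms tolerate the slow networks of workstation clusters well.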

NWChem (www.nwchem-sw.org) is a free ab initio and density functional program designed to run on parallel supercomputers and workstation clusters; it also runs on Windows and Macintosh computers [M. Valiev et al., Comput. Phys. Commun., 181, 1477 (2010)]. [Pg.501]

Myrinet was one of the first networks to be developed expressly for the SAN and cluster market. With a cost of approximately $1600 per node, Myrinet was initially reserved for the more expensive workstation clusters. But with its superior latency of 20 µsec or less, it permitted some classes of more tightly coupled applications to run efficiently that would perform poorly on Ethernet-based clusters. More recently, reduced pricing has expanded its suitability to lower-cost systems, and it has proven very popular. [Pg.8]

SANs are used to interconnect PC clusters or workstation clusters forming server systems and to connect data vaults and other I/O subsystems to the overall system. SANs are also used to connect the individual processor nodes... [Pg.45]

On a workstation or a small workstation cluster, the number of relevant geometries should not exceed a few hundred. Quasicontinuous distributions imply the calculation of a larger number of geometries and may be intractable today except for the simplest systems. [Pg.69]

Speedup Results. The parallel-solver efficiency in 3D simulation of the industrial growth of 100-mm and 300-mm silicon crystals has been assessed using two workstation clusters, one with Fast Ethernet and one with Myrinet as the communication hardware, which differ in the latency and the effective bandwidth of communication. Typical speedups are presented in Table 6.1 (efficiencies are given in parentheses). [Pg.180]
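What the two networks change is captured by the usual linear cost model for a point-to-point message of m bytes (our notation, not the book's):

```latex
% Time to deliver an m-byte message: start-up latency lambda
% plus m bytes at effective bandwidth beta.
T_{\mathrm{comm}}(m) = \lambda + \frac{m}{\beta}
```

Fast Ethernet has a much larger λ and a smaller β than Myrinet, so the latency term dominates for the short, frequent messages of a parallel solver, which is why the two clusters yield different speedups in Table 6.1.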

