
Parallel computer shared memory

As noted above, one of the goals of NAMD 2 is to take advantage of clusters of symmetric multiprocessor workstations and other non-uniform memory access platforms. This can be achieved in the current design by allowing multiple compute objects to run concurrently on different processors via kernel-level threads. Because compute objects interact in a controlled manner with patches, access controls need only be applied to a small number of structures such as force and energy accumulators. A shared memory environment will therefore contribute almost no parallel overhead and generate communication equal to that of a single-processor node. [Pg.480]
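In miniature, the scheme looks like the following C sketch (illustrative only, not NAMD source; every name here is invented): each compute thread accumulates into thread-private buffers and touches the shared force and energy accumulators only inside a brief locked reduction, which is why the access controls add so little overhead.

    /* Minimal sketch of compute objects with locked accumulators.
     * All names are illustrative, not NAMD code. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define NATOMS   1000

    static double forces[NATOMS];   /* shared accumulators */
    static double energy = 0.0;
    static pthread_mutex_t acc_lock = PTHREAD_MUTEX_INITIALIZER;

    static void *compute_object(void *arg)
    {
        long id = (long)arg;
        double local_forces[NATOMS] = {0.0};
        double local_energy = 0.0;

        /* Evaluate this object's interactions into private buffers
         * (placeholder arithmetic stands in for real force terms). */
        for (int i = (int)id; i < NATOMS; i += NTHREADS) {
            local_forces[i] += 1.0;
            local_energy    += 0.5;
        }

        /* Only this short reduction touches shared state. */
        pthread_mutex_lock(&acc_lock);
        for (int i = 0; i < NATOMS; i++)
            forces[i] += local_forces[i];
        energy += local_energy;
        pthread_mutex_unlock(&acc_lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&t[i], NULL, compute_object, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(t[i], NULL);
        printf("total energy = %g\n", energy);
        return 0;
    }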

Supercomputers from vendors such as Cray, NEC, and Fujitsu typically consist of between one and eight processors in a shared memory architecture. Peak vector speeds of over 1 GFLOPS (1000 MFLOPS) per processor are now available. Main memories of 1 gigabyte (1000 megabytes) and more are also available. If multiple processors can be tied together to simultaneously work on one problem, substantially greater peak speeds are available. This situation will be further examined in the section on parallel computers. [Pg.91]

MIMD Multiprocessors. Probably the most widely available parallel computers are the shared-memory multiprocessor MIMD machines. Examples include the multiprocessor vector supercomputers, IBM mainframes, VAX minicomputers, Convex and Alliant minisupercomputers, and Silicon... [Pg.95]

This decreasing efficiency is a general characteristic of shared-memory, shared-bus computers. This example shows unusually high efficiency compared with many other programs, which may be because LINPACK is such a common benchmark that much effort has been devoted to optimising it for both vector and parallel computers. [Pg.96]
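As a concrete illustration of how such efficiency figures are read (the timings below are hypothetical, not LINPACK data), parallel efficiency is the speedup T1/Tp divided by the processor count p; on a shared bus it falls as processors contend for memory:

    /* Speedup and efficiency from (hypothetical) timings. */
    #include <stdio.h>

    int main(void)
    {
        double t1   = 100.0;                       /* 1-CPU time, seconds */
        double tp[] = {100.0, 52.0, 28.0, 16.0};   /* times on 1,2,4,8 CPUs */
        int    p[]  = {1, 2, 4, 8};

        for (int i = 0; i < 4; i++) {
            double speedup = t1 / tp[i];
            printf("p=%d  speedup=%.2f  efficiency=%.0f%%\n",
                   p[i], speedup, 100.0 * speedup / p[i]);
        }
        return 0;
    }

With these invented numbers the efficiency drops from 100% on one processor to about 78% on eight, the characteristic decline described above.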

Transputers. At higher levels of connectedness there is a wide variety of parallel computers. A great many parallel computers have been built using INMOS Transputer chips. Individual Transputer chips run at 2 MFLOPS or greater. Transputer chips have four communication channels, so the chips can readily be interconnected into a two-dimensional mesh network or into any other interconnection scheme where individual nodes are four-connected. Most Transputer systems have been built as additions to existing host computers and are MIMD type. Each Transputer has a relatively small local memory as well as access to the host's memory through the interconnection network. Not surprisingly, problems that best utilize local memory tend to achieve better performance than those that make more frequent accesses to host memory. Systems that access fast local memory and slower shared memory are often referred to as NUMA (nonuniform memory access) architectures. [Pg.96]
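The four-connected mesh follows directly from the four links per chip. A small C sketch (the grid size and node numbering are assumptions for illustration, not Transputer code) shows how each node's four neighbours are determined by its position in the grid:

    /* Neighbours in a 2-D mesh where each node is four-connected. */
    #include <stdio.h>

    #define ROWS 4
    #define COLS 4

    int main(void)
    {
        for (int id = 0; id < ROWS * COLS; id++) {
            int r = id / COLS, c = id % COLS;
            int north = (r > 0)        ? id - COLS : -1;  /* -1 = no link */
            int south = (r < ROWS - 1) ? id + COLS : -1;
            int west  = (c > 0)        ? id - 1    : -1;
            int east  = (c < COLS - 1) ? id + 1    : -1;
            printf("node %2d: N=%2d S=%2d W=%2d E=%2d\n",
                   id, north, south, west, east);
        }
        return 0;
    }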

In working through process control examples, we found that many calculations, data checks, rate checks, and other computationally intensive tasks are done at the first level of inference. Considerations of computational efficiency led to a design utilizing two parallel processors with a shared memory (Figure 1). One of the processors is a 68010 programmed in C. This processor performs the computationally intensive, low-level tasks, which are directed by the expert system in the LISP processor. [Pg.71]
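The division of labour in such a design can be sketched as follows (illustrative only; the two physical processors are simulated here by two POSIX threads, and all names are invented): a supervisor, standing in for the expert system, posts tasks into shared memory, and a numeric worker, standing in for the 68010, executes them.

    /* Supervisor/worker coordination through shared memory. */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
    static int task = 0;            /* 0 = empty, >0 = task id, -1 = quit */
    static double result = 0.0;

    static void *numeric_worker(void *unused)
    {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&lock);
            while (task == 0)
                pthread_cond_wait(&cond, &lock);
            if (task == -1) { pthread_mutex_unlock(&lock); return NULL; }
            result = task * 2.0;    /* stand-in for a rate check etc. */
            task = 0;               /* mark task done */
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
        }
    }

    int main(void)
    {
        pthread_t w;
        pthread_create(&w, NULL, numeric_worker, NULL);
        for (int t = 1; t <= 3; t++) {
            pthread_mutex_lock(&lock);
            task = t;                        /* supervisor posts a task */
            pthread_cond_signal(&cond);
            while (task != 0)                /* wait for completion */
                pthread_cond_wait(&cond, &lock);
            printf("task %d -> %g\n", t, result);
            pthread_mutex_unlock(&lock);
        }
        pthread_mutex_lock(&lock);
        task = -1;                           /* tell the worker to quit */
        pthread_cond_signal(&cond);
        pthread_mutex_unlock(&lock);
        pthread_join(w, NULL);
        return 0;
    }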

Shared-memory parallel processing was certainly more successful for QC in earlier applications and continues to play a significant role in high-performance computational chemistry. A coarse-grained parallel implementation scheme for the direct SCF method by Lüthi et al. allowed for a near-asymptotic speed-up involving a very low parallelization overhead without compromising the vector performance of vector-parallel architectures. [Pg.247]
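The "near-asymptotic speed-up" claim is usefully read against Amdahl's law, S(p) = 1 / (s + (1 - s)/p) for serial fraction s. The sketch below (the 1% serial fraction is an assumption for illustration, not a figure from Lüthi et al.) shows how even a small serial fraction caps the attainable speedup:

    /* Amdahl's law for an assumed 1% serial fraction. */
    #include <stdio.h>

    int main(void)
    {
        double s = 0.01;   /* assumed serial fraction */
        for (int p = 1; p <= 64; p *= 2)
            printf("p=%2d  S=%.2f\n", p, 1.0 / (s + (1.0 - s) / p));
        return 0;
    }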

Rendell et al. compared three previously reported algorithms for the fourth-order triple excitation energy component in MBPT. The authors investigated the implementation of these algorithms on current Intel distributed-memory parallel computers. The algorithms had been developed for shared-... [Pg.254]

Hayes et al. designed a shared-memory parallel version of their bending-corrected rotating linear model (BCRLM) for calculating approximate quantum scattering results. The computational algorithms and their distribution are nearly identical to what has been presented and will not be repeated [274, 282]. [Pg.282]

R. J. Harrison and R. A. Kendall, Theor. Chim. Acta, 79, 337 (1991). A Parallel Version of ARGOS: A Distributed Memory Model for Shared Memory UNIX Computers. [Pg.302]

Vector and Parallel Algorithms for the Molecular Dynamics Simulation of Macromolecules on Shared-Memory Computers. [Pg.310]

Parallel computers can be divided into two classes, based on whether the processors in the system have their own private memory or share a common memory. In a distributed memory system, the processors communicate with each other by sending and receiving messages through a communication network connecting all the processors. The problem to be solved must be explicitly partitioned by the programmer onto the various processors in such a way that load balancing is maintained and communication between processors is minimized and well ordered. For some problems it may not be easy or even... [Pg.1106]
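The send/receive style described here can be made concrete with MPI, the de facto message-passing standard (the excerpt itself names no library; the ranks and data below are illustrative): rank 0 explicitly decides what rank 1 receives, which is exactly the programmer-driven partitioning the text refers to.

    /* Minimal MPI message-passing sketch: rank 0 sends, rank 1 receives. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        double buf[4] = {1.0, 2.0, 3.0, 4.0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Explicit partitioning: this rank decides what rank 1 gets. */
            MPI_Send(buf, 4, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(buf, 4, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %g %g %g %g\n",
                   buf[0], buf[1], buf[2], buf[3]);
        }

        MPI_Finalize();
        return 0;
    }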

We have seen in the previous section that a massively parallel computer consists of a number of nodes connected via a communication network, and that the nodes comprise small groups of processors that share memory and other resources, although a node may also contain just a single processor. Each individual node in a parallel computer is typically much the same as a personal computer, with the addition of specialized hardware (a host channel adaptor (HCA) or a network interface card (NIC)) to connect the computer to the network. Parallelizing an application is not strictly a matter of providing for... [Pg.31]

Each of the individual nodes discussed in the previous section can be a MIMD parallel computer. Larger MIMD machines can be constructed by connecting many MIMD nodes via a high-performance network. While each node can have shared memory, memory is typically not shared between the nodes, at least not at the hardware level. Such machines are referred to as distributed memory computers or clusters, and in this section we consider such parallel computers in more detail. [Pg.34]

The BzzMinimizationMono class does not use parallel computing; however, the BzzMinimizationMonoMP class does, and it is valid for all the cases of the previous class. A multiprocessor machine, an adequate compiler, and OpenMP directives are required to make proper use of shared memory. [Pg.62]

Algorithm robustness, using the OpenMP directives for shared memory parallel computing, is also investigated and implemented for function root-finding in...
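A minimal sketch of what OpenMP directives buy for root-finding (not BzzMath code; the function and interval are invented): the search interval is scanned in parallel for sign changes of f(x), each of which brackets a root that a scalar method can then refine.

    /* Parallel root bracketing with an OpenMP reduction. */
    #include <math.h>
    #include <omp.h>
    #include <stdio.h>

    static double f(double x) { return cos(x) - 0.1 * x; }

    int main(void)
    {
        const int n = 1000000;
        const double a = 0.0, b = 20.0, h = (b - a) / n;
        int nroots = 0;

        #pragma omp parallel for reduction(+:nroots)
        for (int i = 0; i < n; i++) {
            double x0 = a + i * h;
            if (f(x0) * f(x0 + h) < 0.0)   /* sign change brackets a root */
                nroots++;
        }
        printf("found %d bracketing intervals (up to %d threads)\n",
               nroots, omp_get_max_threads());
        return 0;
    }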


