
Shared memory processors

As noted above, one of the goals of NAMD 2 is to take advantage of clusters of symmetric multiprocessor workstations and other non-uniform memory access platforms. This can be achieved in the current design by allowing multiple compute objects to run concurrently on different processors via kernel-level threads. Because compute objects interact in a controlled manner with patches, access controls need only be applied to a small number of structures such as force and energy accumulators. A shared memory environment will therefore contribute almost no parallel overhead and generate communication equal to that of a single-processor node. [Pg.480]
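The access-control pattern described above can be sketched in a few lines of C. This is a minimal illustration assuming POSIX threads, not NAMD's actual code; the names Patch, compute_object, and NCOMPUTES are invented for the example. Each compute object runs on its own kernel-level thread and does the bulk of its work with no synchronization; only the brief update of the shared energy accumulator takes the lock, which is why the parallel overhead stays close to zero.

/* Minimal sketch, not NAMD source: each "compute object" runs on its own
 * kernel-level thread; only the shared accumulator needs an access
 * control (a mutex). Patch, compute_object, NCOMPUTES are invented names. */
#include <pthread.h>
#include <stdio.h>

#define NCOMPUTES 4

typedef struct {
    double energy;          /* shared energy accumulator */
    pthread_mutex_t lock;   /* access control applied only here */
} Patch;

static Patch patch = { 0.0, PTHREAD_MUTEX_INITIALIZER };

static void *compute_object(void *arg)
{
    long id = (long)arg;
    double partial = 1.0 + 0.5 * (double)id;  /* stand-in for real work */

    /* the computation above runs unsynchronized; only this brief
       update of the shared accumulator contends for the lock */
    pthread_mutex_lock(&patch.lock);
    patch.energy += partial;
    pthread_mutex_unlock(&patch.lock);
    return NULL;
}

int main(void)
{
    pthread_t tid[NCOMPUTES];
    for (long i = 0; i < NCOMPUTES; i++)
        pthread_create(&tid[i], NULL, compute_object, (void *)i);
    for (int i = 0; i < NCOMPUTES; i++)
        pthread_join(tid[i], NULL);
    printf("total energy = %f\n", patch.energy);
    return 0;
}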

Supercomputers from vendors such as Cray, NEC, and Fujitsu typically consist of between one and eight processors in a shared memory architecture. Peak vector speeds of over 1 GFLOP (1000 MFLOPS) per processor are now available. Main memories of 1 gigabyte (1000 megabytes) and more are also available. If multiple processors can be tied together to work simultaneously on one problem, substantially greater peak speeds become available. This situation will be further examined in the section on parallel computers. [Pg.91]

The most commercially successful of these systems has been the Convex series of computers. Ironically, these are traditional vector machines, with one to four processors and shared memory. Their Cray-like characteristics were always a strong selling point. Interestingly, SCS, which marketed a minisupercomputer that was fully binary-compatible with the Cray, went out of business. Marketing appears to have played as much of a role here as the inherent merits of the underlying architecture. [Pg.94]

In working through process control examples, we found that many calculations, data checks, rate checks, and other computationally intensive tasks are done at the first level of inference. Considerations of computational efficiency led to a design utilizing two parallel processors with a shared memory (Figure 1). One of the processors is a 68010 programmed in C. This processor performs computationally intensive, low-level tasks, which are directed by the expert system running on the LISP processor. [Pg.71]

Figure 1. Design for the LMI system for process control using two parallel processors with a shared memory.
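A modern analogue of this two-processor arrangement can be sketched with POSIX shared memory; the original LMI hardware predates this API, so the mailbox name and layout below are purely illustrative. One side (standing in for the LISP expert system) posts a task identifier into a shared mailbox that the numeric processor would poll and answer.

/* Hedged modern analogue of the Figure 1 design: one process posts a task
 * into a POSIX shared-memory mailbox for the other to service. The name
 * "/lmi_mailbox" and the struct layout are assumptions for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

typedef struct {
    volatile int    task_id;  /* posted by the "expert system" side */
    volatile double result;   /* to be filled in by the numeric side */
} Mailbox;

int main(void)
{
    /* a real design would pair this with a semaphore or interrupt;
       the polling loop is omitted to keep the sketch short */
    int fd = shm_open("/lmi_mailbox", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, sizeof(Mailbox)) != 0) { perror("ftruncate"); return 1; }

    Mailbox *mb = mmap(NULL, sizeof(Mailbox),
                       PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (mb == MAP_FAILED) { perror("mmap"); return 1; }

    mb->task_id = 42;  /* direct a low-level task, as the LISP side would */
    printf("posted task %d into the shared mailbox\n", mb->task_id);

    munmap(mb, sizeof(Mailbox));
    close(fd);
    shm_unlink("/lmi_mailbox");
    return 0;
}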
The shared memory system is the most expensive of the four generalized architectures, with the global bus system coming in a close second. The fully interconnected system is about five times more cost-effective than a global bus approach for a 30-processor system; however, the ring system is superior to all of them when the process is partitioned to take advantage of the unique bandwidth characteristics that a ring-connected architecture provides. [Pg.252]

For MIMD shared-memory environments, the evaluation facility is likely to be based on per-processor events and states that are explicitly defined by the programmer and/or automatically defined by the programming environment. There is wide variation in the level of detail that can be obtained by automatic instrumentation. Information may be provided at the level of proce-... [Pg.236]
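A sketch of the kind of programmer-defined, per-processor event tracing described here, assuming OpenMP threads stand in for processors; the record_event helper and the trace buffer layout are assumptions for illustration, not any particular environment's API.

/* Illustrative per-processor event tracing: events are explicitly placed
 * by the programmer and timestamped per thread. All names are invented. */
#include <omp.h>
#include <stdio.h>

#define MAX_EVENTS 64
#define MAX_PROCS  16

typedef struct { double t; const char *name; } Event;

static Event trace_buf[MAX_PROCS][MAX_EVENTS];
static int   nevents[MAX_PROCS];

/* explicitly placed by the programmer, recorded per processor (thread) */
static void record_event(const char *name)
{
    int p = omp_get_thread_num();
    if (p < MAX_PROCS && nevents[p] < MAX_EVENTS) {
        trace_buf[p][nevents[p]].t    = omp_get_wtime();
        trace_buf[p][nevents[p]].name = name;
        nevents[p]++;
    }
}

int main(void)
{
    #pragma omp parallel num_threads(4)
    {
        record_event("phase1_start");
        /* ... the instrumented work would go here ... */
        record_event("phase1_end");
    }
    for (int p = 0; p < 4; p++)
        for (int e = 0; e < nevents[p]; e++)
            printf("proc %d: %-12s at t=%.6f s\n",
                   p, trace_buf[p][e].name, trace_buf[p][e].t);
    return 0;
}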

Implementing the shared-memory vector/parallel algorithms developed by Mertz et al. (for evaluation of the potential energies and forces, generation of the nonbonded neighbor list, and satisfaction of holonomic constraints) into CHARMM and AMBER resulted in near-linear speed-ups on eight processors of a Cray Y-MP for the forces and neighbor lists. For the holonomic constraints, speed-ups of 6.0 and 6.4 were obtained for the SHAKE and matrix inversion methods, respectively. [Pg.271]
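As a sanity check on those constraint figures, Amdahl's law lets one back out the implied serial fraction f = (N/S - 1)/(N - 1): a speedup of 6.0 on eight processors corresponds to roughly 5% serial work. The short C program below performs that arithmetic for both reported speed-ups.

/* Amdahl's law back-calculation: S = 1/(f + (1-f)/N) solved for f. */
#include <stdio.h>

int main(void)
{
    double N = 8.0;               /* Cray Y-MP processors */
    double S[] = { 6.0, 6.4 };    /* SHAKE, matrix inversion */
    for (int i = 0; i < 2; i++) {
        double f = (N / S[i] - 1.0) / (N - 1.0);  /* serial fraction */
        printf("speedup %.1f on %.0f procs -> serial fraction %.3f\n",
               S[i], N, f);
    }
    return 0;
}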

Shared Memory: A mechanism that allows any processor to access any memory element using the same method and without the explicit cooperation of any other processor. [Pg.287]
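The definition can be made concrete with a few lines of C: in the OpenMP sketch below, each thread writes an element of a shared array with an ordinary store, and any thread can later read any element, with no explicit cooperation from the others. The array name and sizes are illustrative.

/* Minimal illustration of the definition above: plain loads and stores
 * into one shared address space, no messages between processors. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    double data[4] = { 0 };  /* lives in the single shared address space */

    #pragma omp parallel num_threads(4)
    {
        int id = omp_get_thread_num();
        data[id] = 1.5 * id;  /* an ordinary store; no cooperation needed */
        #pragma omp barrier
        #pragma omp single
        for (int i = 0; i < 4; i++)   /* any thread reads any element */
            printf("data[%d] = %.1f\n", i, data[i]);
    }
    return 0;
}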

The KSR/Series consists of the KSR1 and the more recent KSR2. These MIMD parallel MPP systems have globally addressable virtual shared memory. The hierarchical memory has strong coherency and is patented under the name ALLCACHE. A cell is made from Kendall Square's custom and proprietary RISC processor and 32 Mbyte of ALLCACHE memory. The 64-bit KSR2... [Pg.293]

The shared-memory programming model on the KSR allows a straightforward port of large applications to at least a single processor. KSR claims that an optimized port of AMBER took only three days. KSR has two full-time computational chemists in-house with other systems development staff who were trained as computational chemists. [Pg.300]

Given these characteristics, it is evident that large-scale semiempirical SCF-MO calculations are ideally suited for vectorization and shared-memory parallelization: the dominant matrix multiplications can be performed very efficiently by BLAS library routines, and the remaining minor tasks of integral evaluation and Fock matrix construction can also be handled well on parallel vector processors with shared memory (see Ref. [43] for further details). The situation is less advantageous for massively parallel (MP) systems with distributed memory. In recent years, several groups have reported on the fine-grained parallelization of their semiempirical SCF-MO codes on MP hardware [76-79], but satisfactory overall speedups are normally obtained only for relatively small numbers of nodes (see Ref. [43] for further details). [Pg.571]
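As an illustration of such a dominant matrix multiplication being delegated to BLAS, the sketch below forms a closed-shell density matrix P = 2 C_occ C_occ^T with cblas_dgemm, assuming a CBLAS interface is available; the dimensions and coefficient values are placeholders, and a vendor BLAS would spread this call across the shared-memory processors.

/* Hedged sketch: the dominant matrix multiply of an SCF step handed to
 * BLAS. NBAS, NOCC, and the coefficients are illustrative placeholders. */
#include <cblas.h>
#include <stdio.h>

#define NBAS 100   /* basis functions */
#define NOCC 20    /* occupied orbitals */

int main(void)
{
    static double C[NBAS * NOCC];  /* MO coefficients, row-major */
    static double P[NBAS * NBAS];  /* density matrix */

    for (int i = 0; i < NBAS * NOCC; i++)
        C[i] = 1.0 / (double)(i + 1);  /* placeholder coefficients */

    /* P := 2.0 * C * C^T  (the dominant multiplication) */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                NBAS, NBAS, NOCC,
                2.0, C, NOCC, C, NOCC,
                0.0, P, NBAS);

    printf("P[0][0] = %g\n", P[0]);
    return 0;
}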

Fig. 3. Shared memory systems. A shared memory system consists of multiple processors that are able to access a large central memory directly through a very fast bus system.
In general, these systems consist of clusters of computers, so-called nodes, which are connected via a high-performance communication network (Fig. 4). Using commodity state-of-the-art compute nodes and network technology, these systems provide a very cost-efficient alternative to shared memory systems for divisible, numerically intensive problems that have a low communication/calculation ratio. Conversely, problems with high inter-processor communication demands can lead to network congestion, which degrades overall system performance. If more performance is... [Pg.201]
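The low communication/calculation ratio that makes such clusters cost-efficient can be illustrated with a minimal MPI sketch: each node performs a large local computation and only one small reduction crosses the network. The loop bound and workload here are arbitrary stand-ins.

/* Minimal MPI sketch of the cluster pattern described above: heavy
 * node-local work, one tiny message per node across the network. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* bulk of the time: node-local numerical work (no network traffic) */
    double local = 0.0;
    for (long i = rank; i < 100000000L; i += size)
        local += 1.0 / (1.0 + (double)i);

    /* one small reduction: a low communication/calculation ratio */
    double total = 0.0;
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global sum = %f across %d nodes\n", total, size);

    MPI_Finalize();
    return 0;
}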


