Big Chemical Encyclopedia


Shared memory system

System Interconnect Reliability. From the standpoint of reliability, the shared memory system and the global bus both have problems in the area of single-point failures: if the bus or the central memory fails, the entire system is incapacitated. A ring system, when bypass hardware is employed, demonstrates very good fault-tolerant characteristics. [Pg.250]

System Interconnect Expandability. From the standpoint of expansion limitations, the shared memory system has problems in that the number of ports is fixed. Expanders can alleviate this problem to some degree, but physical construction limits are ultimately reached. Also, the memory bandwidth of the shared memory system is fixed and relatively low, which limits the degree of practical expansion. [Pg.250]

The shared memory system is the most expensive of the four generalized architectures, with the global bus system a close second. The fully interconnected system is about five times more cost-effective than a global bus approach for a 30-processor system; however, the ring system is superior to all when the process is partitioned to take advantage of the unique bandwidth characteristics that a ring-connected architecture provides. [Pg.252]

Fig. 3. Shared memory systems. A shared memory system consists of multiple processors that are able to access a large central memory directly through a very fast bus system.
In general, these systems consist of clusters of computers, so-called nodes, which are connected via a high-performance communication network (Fig. 4). Using commodity state-of-the-art compute nodes and network technology, these systems provide a very cost-efficient alternative to shared memory systems for divisible, numerically computation-intensive problems that have a low communication/calculation ratio. Conversely, problems with high inter-processor communication demands can lead to network congestion, which decreases the overall system performance. If more performance is... [Pg.201]
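The influence of the communication/calculation ratio can be illustrated with a toy model (an assumption for illustration only, not taken from the text): if each step on a node costs t_calc of computation plus t_comm of network traffic, the fraction of ideal speedup that survives shrinks as the ratio r = t_comm / t_calc grows.

```python
# Toy model (assumed, for illustration): per-step node time is
# t_calc + t_comm, so parallel efficiency degrades as the
# communication/calculation ratio r = t_comm / t_calc grows.

def parallel_efficiency(r: float) -> float:
    """Fraction of ideal speedup retained for a comm/calc ratio r."""
    return 1.0 / (1.0 + r)

# A low-ratio problem keeps nearly ideal efficiency ...
assert parallel_efficiency(0.05) > 0.95
# ... while a communication-bound problem loses most of it.
assert parallel_efficiency(4.0) == 0.2
```

This is only a caricature of real network behavior (it ignores congestion and latency), but it captures why low-communication problems map well onto commodity clusters.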

Computational problems involving multi-million-particle ensembles, encountered in the modeling of mesoscopic phenomena, have only recently come to be regarded as typical. The rapid increase in the computational power of modern processors and the growing popularity of coarse-grained discrete particle methods, such as dissipative particle dynamics, the fluid particle model, smoothed particle hydrodynamics, and the lattice Boltzmann gas (LBG), allow complex problems to be modeled on smaller shared-memory systems [101]. [Pg.769]

Boryczko, K., Dzwinel, W., and Yuen, D.A., Modeling heterogeneous mesoscopic fluids in irregular geometries using shared memory systems, Molecular Simulation, 31(1), 45-56, 2005. [Pg.777]

DM-MIMD MPP machines are undoubtedly the fastest-growing class in the family of supercomputers, although this type of machine is more difficult to deal with than shared-memory machines and processor-array machines. For shared-memory systems the data distribution is completely transparent to the user. This is quite different for DM-MIMD systems, where the user has to distribute the data over the processors and also perform the data exchange between processors explicitly. The initial reluctance to use DM-MIMD machines has decreased lately. This is partly due to the now-existing standards for communication software, such as MPI (Message Passing Interface) and PVM (Parallel Virtual Machine), and partly because, at least theoretically, this class of systems is able to outperform all other types of machines. [Pg.101]
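The explicit data distribution and exchange described above can be sketched without a real MPI installation. The following hypothetical example (the helper `distributed_sum` is illustrative, not from any library) uses Python's `multiprocessing` pipes to mimic the send/receive pattern a DM-MIMD programmer must write by hand:

```python
from multiprocessing import Process, Pipe

# Sketch only (not real MPI): mimics the explicit data distribution and
# message exchange a user of a DM-MIMD machine has to program.

def _worker(conn, chunk):
    conn.send(sum(chunk))   # compute on local data, then send the result
    conn.close()

def distributed_sum(data, nparts=2):
    """Split `data` across `nparts` processes and combine partial sums."""
    step = (len(data) + nparts - 1) // nparts
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    parents, procs = [], []
    for chunk in chunks:               # the user distributes the data ...
        parent, child = Pipe()
        p = Process(target=_worker, args=(child, chunk))
        p.start()
        parents.append(parent)
        procs.append(p)
    total = sum(conn.recv() for conn in parents)  # ... and exchanges it explicitly
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(distributed_sum(list(range(8))))  # 28
```

In a real MPI program the same pattern would use `MPI_Send`/`MPI_Recv` or a reduction; the point here is only that, unlike on a shared-memory machine, both the partitioning and the communication appear explicitly in the user's code.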

DM-MIMD systems have several advantages: the bandwidth problem that haunts shared-memory systems is avoided, because the bandwidth scales up automatically with the number of processors. Furthermore, the speed of the memory, which is another critical issue with shared-memory systems (to get a peak performance comparable to that of DM-MIMD systems, the processors of shared-memory machines must be very fast and the speed of the memory must match), is less important for DM-MIMD machines, because more processors can be configured without the aforementioned bandwidth problems. [Pg.102]
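A toy calculation (with hypothetical bandwidth figures assumed for illustration, not taken from the text) shows why aggregate memory bandwidth that scales with processor count eventually overtakes a fixed shared bus:

```python
# Assumed, hypothetical numbers for illustration: a shared bus has a fixed
# bandwidth, while DM-MIMD aggregate memory bandwidth grows with each
# processor's local memory.

BUS_BW = 10.0   # GB/s, fixed total for the shared bus (hypothetical)
NODE_BW = 2.0   # GB/s of local memory bandwidth per node (hypothetical)

def aggregate_bw(p: int) -> float:
    """Total memory bandwidth of a p-processor DM-MIMD machine."""
    return NODE_BW * p  # scales automatically with p, as the text notes

assert aggregate_bw(4) < BUS_BW    # small machine: the fast bus still wins
assert aggregate_bw(64) > BUS_BW   # large machine: distributed memory wins
```

The crossover point depends entirely on the assumed figures; the qualitative behavior, scaling versus a fixed ceiling, is what the paragraph describes.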

Parallelization for shared-memory systems is a relatively easy task, at least compared to that for distributed-memory systems. The reason is that in shared-memory systems the user does not have to keep track of where the data items of a program are stored; they all reside in the same shared memory. For such machines, an important part of the work in a program can often be parallelized, vectorized, or both in an automatic fashion. Consider, for instance, the simple multiplication of two rows of numbers several thousand elements long, an operation that is abundant in the majority of technical/scientific programs. Expressed in the programming language Fortran 90, this operation would look like... [Pg.103]
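The Fortran 90 fragment the text leads into is elided in this excerpt; in Fortran 90 array syntax, the elementwise product of two conformable arrays is simply the whole-array statement `c = a * b`, which a compiler can vectorize or parallelize automatically. A minimal sketch of the same operation in Python:

```python
# Elementwise product of two equal-length rows of numbers, the operation
# the text describes (Fortran 90 whole-array form: c = a * b).

def elementwise_multiply(a, b):
    """Multiply two sequences element by element."""
    assert len(a) == len(b), "arrays must be conformable"
    return [x * y for x, y in zip(a, b)]

c = elementwise_multiply([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
assert c == [4.0, 10.0, 18.0]
```

Each element of the result is independent of the others, which is exactly why such loops are easy targets for automatic vectorization and parallelization on shared-memory machines.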

As with shared-memory systems, application software libraries have been, and are being, developed for distributed-memory systems as well. ScaLAPACK is a distributed-memory version of the LAPACK library mentioned above. Other application libraries may in turn rely on ScaLAPACK, for instance in solving partial differential equations, as in the PETSc package. Much of this software can be found on the World Wide Web, e.g., via http://www.netlib.org. [Pg.104]

