Big Chemical Encyclopedia

Parallel processors

Transition rules are applied in parallel to all cells, so the state to be adopted by each cell in the next cycle is determined before any of them changes its current state. Because of this feature, CA models are appropriate for the description of processes in which events occur simultaneously and largely independently in different parts of the system and, like several other AI algorithms, are well-suited to implementation on parallel processors. Although the states of the cells change as the model runs, the rules themselves remain unchanged throughout a simulation. [Pg.179]
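The synchronous update described above can be sketched in a few lines (my own illustration, not from the source): every cell's next state is computed from the current grid before any cell is overwritten, so the per-cell computations are order-independent and could be distributed across parallel processors.

```python
def step(grid, rule):
    """Apply `rule` to every cell of a 1-D cyclic grid simultaneously."""
    n = len(grid)
    # Next states are derived solely from the old grid ...
    new = [rule(grid[(i - 1) % n], grid[i], grid[(i + 1) % n]) for i in range(n)]
    # ... and only then replace it in one go, so no cell sees a half-updated neighbour.
    return new

# Example transition rule: elementary rule 90 (XOR of the two neighbours).
rule90 = lambda left, centre, right: left ^ right

grid = [0, 0, 0, 1, 0, 0, 0]
grid = step(grid, rule90)
print(grid)  # -> [0, 0, 1, 0, 1, 0, 0]
```

Note that the rule itself (`rule90` here) is fixed for the whole simulation; only the cell states change, exactly as the excerpt states.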

Steps 2 and 3, in which the environment plays no direct part, can be run independently for each prototype rule in the CS; thus the system is well suited to implementation on parallel-processor machines. In the next section, we consider the components that form the system in more detail. [Pg.269]

Parallel computing requires software adapted to run on parallel processors. The common procedure is to decompose the flow domain and assign each subdomain to a separate processor. Fast communication between the processors is then crucial, so that the gain obtained by going parallel is not partly lost. [Pg.174]
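A hedged sketch of that decomposition procedure (mine, not from the source): a 1-D field is split into subdomains, each of which could live on its own processor, and one layer of ghost cells per side carries the neighbour data that must be communicated every step. Here the "communication" is simulated serially; in a real code it would be an MPI-style halo exchange.

```python
def split(field, nproc):
    """Partition the domain into nproc contiguous subdomains."""
    size = len(field) // nproc
    return [field[p * size:(p + 1) * size] for p in range(nproc)]

def exchange_and_step(parts, alpha=0.25):
    """One explicit diffusion step per subdomain; the ghost values stand in
    for the inter-processor communication the text calls crucial."""
    new_parts = []
    for p, part in enumerate(parts):
        # Fetch ghost cells from neighbouring subdomains (insulated outer ends).
        left_ghost = parts[p - 1][-1] if p > 0 else part[0]
        right_ghost = parts[p + 1][0] if p < len(parts) - 1 else part[-1]
        padded = [left_ghost] + part + [right_ghost]
        new_parts.append(
            [padded[i] + alpha * (padded[i - 1] - 2 * padded[i] + padded[i + 1])
             for i in range(1, len(padded) - 1)])
    return new_parts

parts = split([0.0] * 4 + [1.0] * 4, nproc=2)   # two "processors"
parts = exchange_and_step(parts)
print(parts)   # heat has leaked across the subdomain boundary
```

The ghost exchange is exactly the communication cost referred to above: the smaller the subdomains, the larger the ratio of boundary (communication) to interior (computation).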

Hartmann et al. (2006) reported very detailed simulation results (see also Hartmann, 2005) (Fig. 9). Their LB simulation was restricted to a lab-scale vessel of only 10 L, for which 240³ lattice cells, each a bit smaller than 1 mm³, were used; the temporal resolution was only 25 μs. A set of 7 million monodisperse spherical particles 0.3 mm in size was released in the upper 10% of the vessel. At the moment of release, the local volume fraction amounted to 10%. The particle properties were those of calcium chloride. The simulation was carried out on 30 parallel processors of an SGI Altix 3700 system and required 6 weeks for 100 impeller revolutions. [Pg.197]

In working through process control examples, we found that many calculations, data checks, rate checks, and other computationally intensive tasks are done at the first level of inference. Considerations of computational efficiency led to a design utilizing two parallel processors with a shared memory (Figure 1). One of the processors is a 68010 programmed in C. This processor performs the computationally intensive, low-level tasks, which are directed by the expert system in the LISP processor. [Pg.71]
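A rough analogue of that two-processor division of labour (my construction, not the authors' code): a "numeric" worker thread plays the role of the 68010, performing low-level rate checks, and posts its results through a shared queue that the "expert system" level then reasons over.

```python
import threading, queue

shared = queue.Queue()          # stands in for the shared memory

def numeric_worker(samples, limit):
    """Low-level task: flag any rate of change exceeding `limit`."""
    for t in range(1, len(samples)):
        rate = samples[t] - samples[t - 1]
        if abs(rate) > limit:
            shared.put((t, rate))
    shared.put(None)            # done marker

temps = [300.0, 300.5, 301.0, 305.0, 305.2]   # hypothetical measurements
worker = threading.Thread(target=numeric_worker, args=(temps, 2.0))
worker.start()

# "Expert system" level: consume the flags and draw an inference from them.
alarms = []
while (item := shared.get()) is not None:
    alarms.append(item)
worker.join()
print(alarms)   # -> [(3, 4.0)]
```

The point of the original design survives the analogy: the numeric level grinds through every sample, while the symbolic level only sees the few events worth reasoning about.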

An expert, given time to do so, may use calculations to develop inference results. For example, a material balance calculation around a process unit may indicate a measurement inconsistency. To mimic this expertise, general mathematical operations on combinations of measurements, or on functions of measurements, are also implemented in the parallel processor. [Pg.71]
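An illustrative sketch of the material balance check described above (not the paper's implementation): if inflows and outflows around a unit fail to balance within a tolerance, some measurement in the set is inconsistent.

```python
def balance_residual(flows_in, flows_out):
    """Return the material balance residual around a process unit."""
    return sum(flows_in) - sum(flows_out)

def consistent(flows_in, flows_out, tol=0.01):
    """Flag the measurement set as consistent if the residual is within
    a fraction `tol` of the total inflow (tolerance value assumed here)."""
    residual = balance_residual(flows_in, flows_out)
    return abs(residual) <= tol * sum(flows_in)

# Hypothetical stream measurements in kg/h around a mixer:
print(consistent([120.0, 80.0], [199.0]))   # -> True  (0.5% residual)
print(consistent([120.0, 80.0], [185.0]))   # -> False (7.5% imbalance)
```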

Figure 1. Design for the LMI system for process control using two parallel processors with a shared memory.
For a polyatomic reactant with many degrees of freedom the numerical calculations required to execute the program outlined above can easily achieve a scale that is impossible to handle even with a vectorized parallel processor supercomputer. The simplest approximation that reduces the scale of the numerical calculations is the neglect of some subset of the internal molecular motions, but this approximation usually leads to considerable error. A more sophisticated and intuitively reasonable approximation [72, 73] is to reduce the system dimensionality by placing constraints on the values of the internal molecular coordinates (instead of omitting them from the analysis). [Pg.262]
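A toy illustration of the distinction drawn above (my construction, not taken from refs. [72, 73]): a bath coordinate y can either be omitted from a 2-D potential outright, or constrained to a fixed value while its coupling to the reaction coordinate x is retained. The two reduced models give different barriers.

```python
def V(x, y):
    # Toy potential: double well in x, harmonically coupled to y.
    return (x**2 - 1)**2 + 2.0 * (y - 0.3 * x)**2

def V_omitted(x):
    return (x**2 - 1)**2          # y's contribution simply dropped

def V_constrained(x, y0=0.0):
    return V(x, y0)               # y pinned at y0, coupling to x kept

# Barrier measured from the reactant well at x = -1 to the top at x = 0:
barrier_omitted = V_omitted(0.0) - V_omitted(-1.0)
barrier_constrained = V_constrained(0.0) - V_constrained(-1.0)
print(barrier_omitted, round(barrier_constrained, 2))   # -> 1.0 0.82
```

Omitting y loses the coupling term entirely, whereas constraining y keeps its energetic effect along the path, which is why the excerpt calls the constrained treatment the more reasonable approximation.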

I. S. Duff and J. K. Reid, eds., Proceedings of the 2nd International Conference on Vector and Parallel Processors in Computational Science, in Comput. Phys. Commun., 37 (1-3), North-Holland, Amsterdam, The Netherlands, 1985. [Pg.281]

Computer architectures have evolved over the years from the classic von Neumann architecture into a variety of forms, with great benefits to operating speed. The major contributions to speed have been the introduction of parallel processors and of pipelining. For many years, these innovations were transparent to the programmer. For example, to program in Fortran to run on a CDC 6600 (1), one did not take cognizance of the existence of multiple functional units, nor did one consider the I/O channels when writing Fortran applications for the IBM 360 series (2). This was because the parallel processors were hidden behind appropriate hardware or software. [Pg.238]

The system was equilibrated for more than 100 ps; the equilibration was checked by monitoring energy trajectories and root mean square (rms) deviations from the X-ray structure. Using 8 parallel processors, it took 1 hour to calculate a 2 ps simulation. The restraints on the system during the molecular dynamics calculation were released in a stepwise mode, first on side chains and then on other atoms. The rms deviation of all Cα atoms between the cytochrome b subunit in the X-ray structure and the unrestrained model after 120 ps was 1.38 Å. [Pg.125]
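A minimal sketch (assumptions mine, not the paper's code) of the rms-deviation measure quoted above, computed between matched Cα positions of a model and the X-ray structure; it assumes the two structures have already been superimposed.

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation between matched coordinate lists (Å).
    Assumes the structures are already optimally superimposed."""
    assert len(coords_a) == len(coords_b)
    sq = sum((ax - bx)**2 + (ay - by)**2 + (az - bz)**2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))

# Hypothetical Cα positions (Å), spaced ~3.8 Å apart along a chain:
xray  = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)]
model = [(0.0, 0.0, 0.0), (3.8, 1.0, 0.0), (7.6, 0.0, 2.0)]
print(round(rmsd(xray, model), 3))   # -> 1.291
```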

I. Yamazaki, S. Okazaki, T. Minami, N. Ohta, "Optically Switching Parallel Processors by Means of Langmuir-Blodgett Multilayer Films", Appl. Opt., 33, 7561 (1994). [Pg.173]

Finally, we draw attention to several review articles in this area. In 1986 Löwdin [123] considered various aspects of the historical development of computational QC in view of the development of both conventional supercomputers and large-scale parallel computers. More recently, Weiner [124] presented a discussion of the programming of parallel computers and their use in molecular dynamics simulations, free energy perturbation, and large-scale ab initio calculations, as well as the use of very elaborate graphical display programs in chemistry research. We also note a review on the use of parallel processors in... [Pg.245]

E. Lusk, J. Boyle, R. Butler, T. Disz, B. Glickfeld, R. Overbeek, J. Patterson, and R. Stevens, Portable Programs for Parallel Processors, Holt, Rinehart & Winston, Chicago, 1987. [Pg.302]

A. H. L. Emmen, A Survey of Vector and Parallel Processors, in Algorithms and Applications on Vector and Parallel Computers, H. J. J. te Riele, T. J. Dekker, and H. A. van der Vorst, eds., Vol. 3 of Special Topics in Supercomputing, North-Holland/Elsevier Science Publishers, 1987. [Pg.271]

Molecular Modelling and Molecular Mechanics Calculations with Personal Parallel Processors. ... [Pg.427]

The hardware limitations of the program rest mainly with the creation of the lattice in memory. The lattices used for the final results, up to 8 million water molecules in size (in the form of a cube with length, width, and height of 110, containing 7,986,000 molecules), were created on a supercomputer with 2 GB of RAM. Thus, this aspect of the program depends directly on the amount of RAM available and is indefinitely expandable. The supercomputer used for the calculations was the U2 machine at the Center for Computational Research at the University at Buffalo. In order to run the program on a parallel-processor supercomputer, we had to use a computer-specific script. [Pg.327]
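The RAM scaling noted above can be made concrete with a back-of-the-envelope estimator (mine; the per-molecule storage figure is an assumption, not taken from the paper):

```python
def lattice_memory_gb(n_molecules, bytes_per_molecule=128):
    """Rough in-memory lattice size; bytes_per_molecule is an assumed
    figure (coordinates, occupancy, and bookkeeping per site)."""
    return n_molecules * bytes_per_molecule / 1024**3

# The ~8-million-molecule lattice quoted above:
print(round(lattice_memory_gb(7_986_000), 2))   # ~0.95 GB at 128 B/molecule
```

At that assumed storage density the quoted lattice fits comfortably in the 2 GB machine, and the largest tractable system indeed grows linearly with available RAM.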

The algorithm defined by Eqs. (27)-(29) can be effectively coded for both vector and parallel-processor computers. [Pg.97]

Kirk, D.B., Hwu, W.W. Programming Massively Parallel Processors, Morgan Kaufmann Publishers, Burlington, 2010. [Pg.19]

Hardware. Computers built to work like neural nets are called "parallel processors". A parallel processor uses a large number of small, interconnected processing units rather than a single CPU. Prominent among these is Thinking Machines Corporation's "Connection Machine", which can realize a variety of neural net models. It is programmable in LISP and is well suited to database tasks. Hecht-Nielsen produces the ANZA board, a co-processor which allows the PC/AT to emulate a parallel processor. [Pg.69]

Schmollinger, M., K. Nieselt, M. Kaufmann, and B. Morgenstern. 2004. DIALIGN P: Fast pair-wise and multiple sequence alignment using parallel processors. BMC Bioinformatics 5: 128. [Pg.74]

Microelectronics. Electrochemical phenomena are essential in the manufacture of electronic and photonic systems, as well as being responsible for the quality and reliability of such systems. Applications and research are outlined in areas that include the manufacture of microcircuits, interconnecting networks, lightwave communication devices, parallel processors, content-addressable memories, and nerve-electronic interfaces. [Pg.58]

Because multilayer interconnecting networks are an important element of advanced chips and parallel processors, it is essential that an understanding of the corrosion processes that affect their reliability be developed. Needed are methods to quantify metal corrosion and ion transport in polymers and means to identify electrochemically reliable metal-polymer systems. [Pg.100]





