Big Chemical Encyclopedia


Minimization parallel computing

Parallel computers can be divided into two classes, based on whether the processors in the system have their own private memory or share a common memory. In a distributed memory system, the processors communicate with each other by sending and receiving messages through a communication network connecting all the processors. The problem to be solved must be explicitly partitioned by the programmer onto the various processors in such a way that load balancing is maintained and communication between processors is minimized and well ordered. For some problems it may not be easy or even... [Pg.1106]
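The explicit partitioning step described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the function name and block scheme are our own, not from the source): it splits n work items into p contiguous blocks whose sizes differ by at most one, the simplest way to keep the load balanced across processors in a distributed-memory program.

```python
def partition(n, p):
    """Split range(n) into p contiguous blocks whose sizes differ by at most 1,
    so that no processor receives more than one extra work item."""
    base, extra = divmod(n, p)
    blocks, start = [], 0
    for rank in range(p):
        size = base + (1 if rank < extra else 0)
        blocks.append(range(start, start + size))
        start += size
    return blocks
```

In a real message-passing program each processor would work only on its own block and exchange boundary data with its neighbours; the sketch shows only the decomposition.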

Single-processor machines, or cases where parallel computing is unavailable. Specifically, parallel computing is unsuitable when the function is very simple, and unavailable when the minimization is embedded in a calculation where parallelization is already being used. [Pg.44]

It is possible and helpful to use parallel computing (for one-dimensional minimization). [Pg.44]
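One way parallelism helps a one-dimensional minimization is by evaluating the objective at several bracket points simultaneously each round. The sketch below is a hypothetical illustration (names and the grid-shrinking scheme are ours, not from the source); it uses a thread pool for portability, though for an expensive objective a process pool would be the realistic choice — and for a very cheap objective the pool overhead dominates, which is exactly the unsuitable case noted above.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_grid_minimize(func, lo, hi, points=8, rounds=20):
    """Shrink a bracket [lo, hi] around the minimizer of a unimodal func by
    evaluating `points` grid samples per round in parallel, then keeping the
    sub-interval around the best sample."""
    with ThreadPoolExecutor() as pool:
        for _ in range(rounds):
            xs = [lo + (hi - lo) * i / (points - 1) for i in range(points)]
            ys = list(pool.map(func, xs))          # the parallel step
            i = ys.index(min(ys))
            lo, hi = xs[max(i - 1, 0)], xs[min(i + 1, points - 1)]
    return (lo + hi) / 2
```

Each round shrinks the bracket by a factor of 2/(points-1), so a handful of rounds already locates the minimizer to high precision.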

Potentially lower power consumption. Regular parallel architectures permit trading off clock speed and parallel computation in a much more flexible way than irregular ones. This issue is important when power consumption is to be minimized. [Pg.10]

A. Benaini and Y. Robert. Space-time-minimal systolic arrays for Gaussian elimination and the algebraic path problem. Parallel Computing, 15:211-225, 1990. [Pg.66]

Finally, there is the issue of vectorization, also referred to as scalar vs. parallel computing. Serial computers execute instructions sequentially, in a specific order; parallel machines execute multiple instructions simultaneously. Often, different flow domains are apportioned to different machines, and message-passing interfaces must be designed so that these domains communicate with each other in an optimal way that minimizes computation time. Point relaxation gave way to line methods when serial computers were... [Pg.152]

All papers mentioned above apply the rules in the maximally parallel mode: in each computation step, the chosen multiset of rules cannot be extended anymore, i.e., no further rule could be added to this chosen multiset of rules in such a way that the resulting extended multiset could still be applied. Recently, another strategy of applying rules in parallel was introduced, the so-called minimal parallelism [5]: in each computation step, the chosen multiset of rules to be applied in parallel cannot be extended by any rule out of a set of rules from which no rule has been chosen so far for this multiset, in such a way that the resulting extended multiset could still be applied. (This is not the only way to interpret the idea of minimal parallelism, e.g., see [6] for other possibilities, yet the results elaborated in this paper hold true for all the variants of minimal parallelism defined there.) This introduces an additional degree of non-determinism in the system evolution, but computational completeness and polynomial solutions to SAT were still obtained in the new framework, by using P systems with active membranes, with three polarizations and division of only elementary membranes. [Pg.63]
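The maximally parallel mode can be illustrated on a bare multiset-rewriting step, stripped of membranes. The sketch below is our own toy model, not code from the cited papers: a rule consumes a left-hand multiset and produces a right-hand one, and one step greedily extends the chosen multiset of rule applications until no further rule can be added — the non-extendability condition that defines maximal parallelism. (Under minimal parallelism, with each rule forming its own rule set, it would suffice to apply each still-applicable rule once.)

```python
from collections import Counter

# Toy rule table: r1 rewrites a -> bb, r2 rewrites b -> c.
rules = {
    "r1": (Counter("a"), Counter("bb")),
    "r2": (Counter("b"), Counter("c")),
}

def maximally_parallel_step(contents, rules):
    """One step in the maximally parallel mode: keep adding rule instances to
    the chosen multiset until no rule is applicable to the remaining objects,
    then add all right-hand sides at once."""
    contents = contents.copy()
    chosen = Counter()
    progress = True
    while progress:
        progress = False
        for name, (lhs, _) in rules.items():
            if all(contents[o] >= n for o, n in lhs.items()):
                contents -= lhs          # reserve the consumed objects
                chosen[name] += 1
                progress = True
    produced = Counter()
    for name, count in chosen.items():
        for o, n in rules[name][1].items():
            produced[o] += n * count
    return contents + produced, chosen
```

Starting from the multiset aab, rule r1 must be applied twice and r2 once; no smaller multiset of applications is maximal.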

In this paper we continue the study of P systems working in the minimally parallel way, and we prove that the polarizations can be avoided, at the price of using all five types of rules for computational completeness, and even the division of non-elementary membranes for computational efficiency. On the other hand, in the proof establishing computational completeness, all types of rules can be restricted to handling only single objects in the membranes (we call this the one-normal form for P systems with active membranes). [Pg.63]

We now improve the equalities from Theorem 1 in certain respects, starting with proving the computational completeness of P systems with active membranes in the one-normal form when working in the maximally parallel mode, and then we extend this result to the minimal parallelism. [Pg.66]

The previous proof can easily be changed in order to obtain the computational completeness of P systems in the one-normal form also in the case of minimal parallelism. Specifically, we can introduce additional membranes in order to avoid having two or more evolution rules applied in the same region, or one or more evolution rules together with a rule of types (b)-(e) involving the same membrane ... [Pg.72]

To accommodate the possible chemical reactions of the ongoing corrosion process, the calculated concentrations c(t + Δt) (cf. Fig. 32.3) must be corrected according to the local thermodynamic equilibrium. For this purpose, the concentrations c(t + Δt) are transferred into a thermodynamic subroutine, ThermoScript [10], which contains the commercial program ChemApp [11]. ChemApp is based on a numerical Gibbs energy minimization routine in combination with tailor-made databases [12]. In order to avoid excessive calculation times, the parallel-computing system PVM (parallel virtual machine) is used, i.e., ThermoScript distributes the individual... [Pg.573]
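The structure being exploited here is that each grid node's equilibrium correction is independent of all the others, so the calls can be farmed out to workers. The sketch below is purely illustrative: the clip-to-solubility function is a hypothetical stand-in for the actual ChemApp Gibbs-minimization call, and a thread pool stands in for PVM's worker processes.

```python
from concurrent.futures import ThreadPoolExecutor

SOLUBILITY_LIMIT = 1.0  # hypothetical stand-in for a real equilibrium model

def local_equilibrium(c):
    """Toy replacement for the per-node Gibbs energy minimization:
    clip the local concentration to a solubility limit."""
    return min(c, SOLUBILITY_LIMIT)

def correct_concentrations(concentrations, workers=4):
    """Correct each node's c(t + dt) independently; because the calls do not
    interact, they can be distributed across a pool of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(local_equilibrium, concentrations))
```

The design point is only the distribution pattern: independent per-node corrections make this an embarrassingly parallel step regardless of which equilibrium code runs inside each call.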

The complexity analysis shows that the load is evenly balanced among processors, and therefore we should expect speedup close to P and efficiency close to 100%. There are, however, a few extra terms in the expression of the time complexity (first-order terms in N) that exist because of the need to compute the next available row in the force matrix. These row allocations can be computed ahead of time, so this overhead can be minimized; this is done in the next algorithm. Note that the communication complexity is the worst case over all interconnection topologies, since simple broadcast and gather on distributed-memory parallel systems are assumed. [Pg.488]
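Precomputing the row allocation can be sketched as follows (a hypothetical illustration, not the cited algorithm). In a triangular pairwise force matrix, row i holds n-i-1 interactions, so assigning contiguous rows to each processor would leave the last processor nearly idle; interleaving the rows cyclically evens out the per-processor work, and the assignment is known ahead of time, removing the run-time "next available row" bookkeeping.

```python
def cyclic_row_allocation(n, p):
    """Assign the n rows of a triangular force matrix to p processors
    cyclically: processor `rank` gets rows rank, rank+p, rank+2p, ...
    Interleaving balances the work because row i costs n-i-1 pair terms."""
    return [list(range(rank, n, p)) for rank in range(p)]

def work(rows, n):
    """Number of pair interactions a processor computes for its rows."""
    return sum(n - i - 1 for i in rows)
```

With n=10 and p=2, the two processors compute 25 and 20 pair terms — close to even, whereas a contiguous block split would give 35 and 10.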

