Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Parallel programming

Kale, L. V. The Chare Kernel parallel programming language and system. In Proceedings of the International Conference on Parallel Processing, vol. II. CRC Press, Boca Raton, Florida, 1990. [Pg.482]

Kale, L. V., Bhandarkar, M., Jagathesan, N., Krishnan, S., Yelon, J. Converse: An interoperable framework for parallel programming. In Proceedings of the 10th International Parallel Processing Symposium. IEEE Computer Society Press, Los Alamitos, California, 1996. [Pg.482]

W. Gropp, E. Lusk, and A. Skjellum, Using MPI: Portable Parallel Programming with the Message-Passing Interface, Scientific and Engineering Computation Series, 1994. [Pg.492]

The answers to these questions drive the next issue—of when to switch vendors. The needs of the project may drive the sponsor to run one or more parallel programs with other vendors: one campaign to produce materials for first-in-human studies, another to produce Phase II clinical trial supply materials, and a third to determine the impact of the different campaigns on the clinical program. Now what started as one sponsor-vendor relationship for one project has turned into three. Multiply this by the number of projects your company is managing at any given time and you will begin to understand the extent of the impact of such a decision. [Pg.361]

Implementation of the whole set of integral algorithms within the ARIADNE molecular program [66a], as well as the MOLSIMIL molecular Quantum Similarity code [66b], developed in our laboratory, is under way. A discussion of the sequential, vector, and parallel programming features of the CETO integral calculation will be published elsewhere. Perhaps other available ETO functions, left unexplored in this paper, will be studied in the near future and the... [Pg.230]

In designing a complex parallel program, it is useful to think of any parallel computer in terms of a NUMA model in which all memory, anywhere in the system, can be accessed by any processor as needed. One can then focus independently on the questions of memory access methods (which determine coding style for an application) and memory access costs (which ultimately determine program efficiency). [Pg.213]
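The second question—memory access costs—can be made concrete with a simple two-level cost model. The sketch below is ours, with hypothetical latencies; it only illustrates why keeping the remote-access fraction low dominates efficiency on a NUMA machine:

```python
def effective_access_time(local_ns, remote_ns, remote_fraction):
    """Average memory access time under a two-level NUMA cost model:
    a fraction of accesses hit local memory, the rest go remote."""
    return (1.0 - remote_fraction) * local_ns + remote_fraction * remote_ns

# Hypothetical latencies: 100 ns local, 1000 ns remote.
# Even 5% remote accesses raise the average cost by 45%.
print(effective_access_time(100.0, 1000.0, 0.05))  # 145.0
```

Because the remote penalty is a large multiple of the local latency, small changes in data placement (which determine `remote_fraction`) can matter more than any other tuning.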

Parallel programming paradigms and environments differ in many aspects. These variations arise from at least four factors ... [Pg.225]

Rather than review the extensive literature on parallel programming environments, we examine a few that are of most utility in computational chemistry, are in common use, or represent current directions of research. [Pg.226]

The Linda model supports many different parallel programming paradigms. At its simplest, the distributed-data environment offers an improvement over raw message passing by promoting an uncoupled programming style. This is because the synchronization of processes resulting from access to distributed data is minimal. However, arbitrarily complex shared and distributed data structures may be created. [Pg.230]
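Linda's distributed-data environment rests on three operations: depositing a tuple, withdrawing a matching tuple (blocking until one appears), and reading without withdrawing. The toy sketch below is ours—class and method names are not Linda's actual API—but it captures how blocking matches give the uncoupled synchronization described above:

```python
import threading

class TupleSpace:
    """Minimal Linda-style tuple space (illustrative, not Linda's API).
    out() deposits a tuple; in_() withdraws a matching tuple; rd() reads
    one without withdrawing. None in a pattern acts as a wildcard.
    in_()/rd() block until a match exists, which is the only
    synchronization between otherwise uncoupled processes."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern):
        for t in self._tuples:
            if len(t) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, t)
            ):
                return t
        return None

    def in_(self, pattern):
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            self._tuples.remove(t)
            return t

    def rd(self, pattern):
        with self._cond:
            while (t := self._match(pattern)) is None:
                self._cond.wait()
            return t

ts = TupleSpace()
ts.out(("sum", 42))
print(ts.rd(("sum", None)))   # ('sum', 42) - still in the space
print(ts.in_(("sum", None)))  # ('sum', 42) - now withdrawn
```

Note that the producer and consumer never name each other—only the tuples—which is what makes the style uncoupled relative to raw message passing.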

The performance of a parallel program is important. If it were not, it would not make sense to go to the trouble and expense of parallelizing the program in the first place. Thus, parallel programs must first be designed so that good performance is possible, then coded to actually achieve that potential. [Pg.235]

Once a program has been implemented, measurement-based evaluation is useful to refine its implementation by highlighting sources of inefficiency like load imbalance or high communication costs. Performance visualization, based on recorded program behavior, is a particularly powerful method for detecting unexpected behavior within a parallel program. [Pg.236]
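One of the simplest measurement-based diagnostics mentioned above is load imbalance, often summarized as the ratio of the busiest processor's time to the average. A small sketch (the metric definition is a common convention, the timing data hypothetical):

```python
def load_imbalance(busy_times):
    """Imbalance factor max/mean over per-processor busy times.
    1.0 means perfect balance; since parallel runtime is set by the
    slowest processor, a factor of 2.0 means half the machine's
    capacity is wasted waiting."""
    mean = sum(busy_times) / len(busy_times)
    return max(busy_times) / mean

# Hypothetical per-processor busy times (seconds) from a recorded trace:
print(load_imbalance([9.0, 10.0, 11.0, 30.0]))  # 2.0
```

A performance-visualization tool presents the same per-processor trace graphically, which makes the one overloaded processor obvious at a glance.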

At the present time, performance prediction is primarily a conceptual and mathematical process that is not supported by specific tools. Illustrations of the process can be found in many computer science articles that discuss such algorithms. However, most suppliers of parallel computers and parallel programming software provide some sort of performance evaluation tools matched to their programming environment. There is wide variation in their basic capabilities and assumptions, as well as in the breadth of features and ease of use, but three main classes of capabilities can be identified. [Pg.236]
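As a minimal example of such a paper-and-pencil prediction, one can combine the serial fraction of the work with the processor count in the style of Amdahl's law. All parameter values below are hypothetical, and real models would add communication terms specific to the machine and algorithm:

```python
def predicted_speedup(serial_fraction, p):
    """Amdahl-style prediction: normalized parallel runtime is the
    serial part plus the parallelizable part divided across p
    processors; speedup is its reciprocal."""
    t_parallel = serial_fraction + (1.0 - serial_fraction) / p
    return 1.0 / t_parallel

# With just 5% serial work, 16 processors yield well under 16x:
print(round(predicted_speedup(0.05, 16), 2))  # 9.14
```

Even this crude model explains why measured speedups saturate, and it gives a baseline against which tool-based measurements can be compared.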

We now reexamine message passing as it pertains to software development and the interface of application software and library software. Compiler-managed parallelism is not yet ready for prime time. This means that efficient parallel programs are usually coded by hand, typically using point-to-point message passing and simple collective communications. There are many problems associated with this approach. [Pg.237]
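To make "coded by hand" concrete, the sketch below builds a broadcast—normally a single collective call—out of point-to-point sends and receives. Thread-safe queues stand in for the network, and all names are ours; a real code would use a message-passing library's send/receive calls:

```python
import queue
import threading

def broadcast(rank, size, root, value, channels):
    """Hand-coded broadcast from point-to-point operations.
    channels[i] is rank i's incoming message queue; put() plays the
    role of a send and get() a blocking receive."""
    if rank == root:
        for dest in range(size):
            if dest != root:
                channels[dest].put(value)  # point-to-point send
        return value
    return channels[rank].get()            # blocking receive

size = 4
channels = [queue.Queue() for _ in range(size)]
results = [None] * size

# One thread per "processor"; each calls the same SPMD-style routine.
threads = [
    threading.Thread(
        target=lambda r=r: results.__setitem__(
            r, broadcast(r, size, root=0, value="data", channels=channels)
        )
    )
    for r in range(size)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # ['data', 'data', 'data', 'data']
```

Even this tiny example hints at the problems: every rank must execute matching operations in a consistent order, and a single mismatched send or receive hangs or corrupts the run.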

Deadlock A halting condition in a parallel program when two or more processors are waiting for data from each other. For example, if processor a is waiting for a message from processor b and processor b is waiting for a message from processor a, then a and b are deadlocked. [Pg.284]
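The standard cure for the a/b scenario above is to break the symmetry so the two operations cannot mutually block—for example, one side sends before receiving while the other receives before sending. A sketch of such a pairwise exchange, again with queues standing in for a real message-passing library:

```python
import queue
import threading

def exchange(rank, my_value, inboxes):
    """Pairwise exchange between ranks 0 and 1. If both ranks tried to
    receive first, each would block waiting for the other: deadlock.
    Breaking the symmetry (even rank sends first, odd rank receives
    first) guarantees progress."""
    other = 1 - rank
    if rank % 2 == 0:
        inboxes[other].put(my_value)   # send, then receive
        return inboxes[rank].get()
    received = inboxes[rank].get()     # receive, then send
    inboxes[other].put(my_value)
    return received

inboxes = [queue.Queue(), queue.Queue()]
results = [None, None]
threads = [
    threading.Thread(
        target=lambda r=r: results.__setitem__(r, exchange(r, f"msg{r}", inboxes))
    )
    for r in (0, 1)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # ['msg1', 'msg0']
```

The same even/odd ordering generalizes to exchanges between neighboring ranks in larger programs.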

N. Carriero and D. Gelernter, How to Write Parallel Programs: A First Course, MIT Press,... [Pg.302]

D. Y. Cheng, NASA Ames Research Center Technical Report 5/756, Moffett Field, CA, 1992. A Survey of Parallel Programming Languages and Tools. [Pg.302]

S. Hiranandani, K. Kennedy, and C. Tseng, in Compilers and Runtime Software for Scalable Multiprocessors, J. Saltz and P. Mehrotra, Eds., Elsevier, Amsterdam, 1991. Compiler Support for Machine-Independent Parallel Programming in FORTRAN D. [Pg.304]

User's Guide to the P4 Parallel Programming System. P4 is a set of portable parallel programming routines being distributed by Ewing Lusk of the Mathematics and Computer Science Division at Argonne. P4 is available by anonymous ftp from info.mcs.anl.gov and by electronic mail from netlib@ornl.gov. [Pg.304]

I. Foster and S. Taylor, Strand: New Concepts in Parallel Programming, Prentice-Hall, Englewood Cliffs, NJ, 1990. [Pg.305]

K. M. Chandy and S. Taylor, An Introduction to Parallel Programming, Jones and Bartlett, Boston, 1992. [Pg.305]

I. Foster and S. Tuecke, Argonne National Laboratory Report ANL 91/32, Argonne, IL, January 1990. Parallel Programming with PCN. [Pg.305]

E. Lusk, Theor. Chim. Acta, 84, 377 (1993). Performance Visualization for Parallel Programs. [Pg.305]



© 2024 chempedia.info