
Collective Communication Operations

Kielmann, T., Hofman, R.F.H., Bal, H.E., Plaat, A., Bhoedjang, R.A.F. (1999) "MagPIe: MPI's collective communication operations for clustered wide area systems," In Proc. Symposium on Principles and Practice of Parallel Programming, San Diego. [Pg.379]

Algorithms and cost analyses for a number of collective communication operations have been discussed in some detail by Grama et al. A comprehensive performance comparison of implementations of MPI on different network interconnects (InfiniBand, Myrinet, and Quadrics), including both micro-level benchmarks (determination of latency and bandwidth) and application-level benchmarks, has been carried out by Liu et al. A discussion of the optimization of collective communication in MPICH, including performance analyses of many collective operations, has been given by Thakur et al. [Pg.56]

Thakur, R., R. Rabenseifner, and W. Gropp. Optimization of collective communication operations in MPICH. Int. J. High Perform. Comput. Appl. 19:49-66, 2005. [Pg.56]

The values for these machine-specific parameters can be somewhat dependent on the application. For example, the flop rate can vary significantly depending on the type of operations performed. The accuracy of a performance model may be improved by using values for the machine-specific parameters that are obtained for the type of application in question, and the use of such empirical data can also simplify performance modeling. Thus, if specific, well-defined types of operations are to be performed in a parallel program (for instance, certain collective communication operations or specific computational tasks), simple test programs using these types of operations can be written to provide the appropriate values for the pertinent performance parameters; a sketch of such a test program is shown below. We will show examples of the determination of application-specific values for α, β, and γ in section 5.3.2. [Pg.81]
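As an illustration of such a test program, the sketch below (a minimal example written for this page, not reproduced from the text, and assuming the standard MPI C bindings) uses a ping-pong loop between two processes to time messages of increasing size; fitting the measured one-way times to t(n) ≈ α + βn gives estimates of the latency α and the inverse bandwidth β.

```c
/* Ping-pong sketch for estimating latency (alpha) and inverse bandwidth (beta).
 * Illustrative only; run with at least two processes (e.g., mpirun -np 2). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nrep = 1000;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (int nbytes = 1; nbytes <= (1 << 20); nbytes *= 2) {
        char *buf = malloc(nbytes);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < nrep; i++) {
            if (rank == 0) {        /* send, then wait for the echo */
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) { /* echo the message back */
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t = (MPI_Wtime() - t0) / (2.0 * nrep); /* one-way message time */
        if (rank == 0)
            printf("%8d bytes  %12.3e s\n", nbytes, t);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}
```

A similar loop around the computational kernel of interest, timed with MPI_Wtime, can be used to obtain an application-specific value for the time per floating point operation, γ.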

Collective communication operations can reduce the scalability of a parallel program by introducing a communication bottleneck. Consider, for example, an algorithm requiring floating point operations and using... [Pg.105]

We will illustrate parallel matrix-vector multiplication algorithms using collective communication in section 6.4, and detailed examples and performance analyses of quantum chemistry algorithms employing collective communication operations can be found in sections 8.3, 9.3, and 10.3. [Pg.105]

As opposed to the collective communication operation used in the integral transformation, this communication step is not scalable: the communication time will either increase with p or, if the latency term can be neglected, remain nearly constant as p increases. This communication step is therefore a potential bottleneck, which may degrade the parallel performance of the LMP2 procedure as the number of processes increases. To what extent this happens depends on the actual time required for this step compared with the other, more scalable, steps of the LMP2 procedure, and we will discuss this issue in more detail in the following section. [Pg.174]
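For concreteness, one plausible cost model consistent with this statement (an illustrative assumption, not a formula taken from the text) is that of collecting a fixed total data volume of n words, distributed evenly over the p processes, onto a single process:

$$ t_{\mathrm{comm}} \approx (p-1)\,\alpha + \frac{p-1}{p}\,n\,\beta $$

The latency term grows linearly with p, while the bandwidth term approaches $n\beta$ and thus remains nearly constant as p increases.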

Having introduced the basic MPI calls for managing the execution environment, let us now discuss how to use MPI for message passing. MPI provides support for both point-to-point and collective communication operations. [Pg.183]
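A minimal point-to-point sketch (an illustrative example written for this page, not the program from the text) looks as follows in C:

```c
/* Minimal point-to-point example: rank 0 sends one integer to rank 1.
 * Illustrative sketch; run with at least two processes. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* dest 1, tag 0 */
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```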

A number of the most widely used collective communication operations provided by MPI are listed in Table A.3. The collective operations have been grouped into operations for data movement only (broadcast, scatter, and gather operations), operations that both move data and perform computation on data (reduce operations), and operations whose only function is to synchronize processes. In the one-to-all broadcast, MPI_Bcast, data is sent from one process (the root) to all other processes, while in the all-to-all broadcast, MPI_Allgather, data is sent from every process to every other process (one-to-all and all-to-all broadcast operations are discussed in more detail in section 3.2). The one-to-all scatter operation, MPI_Scatter, distributes data from the root process to all other processes (sending different data to different processes), and the all-to-one gather, MPI_Gather, is the reverse operation, gathering data from all processes onto the root. [Pg.185]
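The short sketch below illustrates the one-to-all and all-to-all broadcast calls just described; the buffer names and message sizes are illustrative assumptions, not taken from the text.

```c
/* Sketch of the data-movement collectives discussed above.
 * Buffer names and sizes are illustrative assumptions. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nproc, root = 0, n = 4;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    double x[4] = {0.0, 0.0, 0.0, 0.0};
    if (rank == root) { x[0] = 1.0; x[1] = 2.0; x[2] = 3.0; x[3] = 4.0; }

    /* One-to-all broadcast: the root's n values end up on every process. */
    MPI_Bcast(x, n, MPI_DOUBLE, root, MPI_COMM_WORLD);

    /* All-to-all broadcast: each process contributes n values, and every
     * process receives the full set of n*nproc values. */
    double *all = malloc(n * nproc * sizeof(double));
    MPI_Allgather(x, n, MPI_DOUBLE, all, n, MPI_DOUBLE, MPI_COMM_WORLD);

    free(all);
    MPI_Finalize();
    return 0;
}
```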

To illustrate the use of collective communication operations, we show in Figure A.2 an MPI program that employs the collective communication operations MPI_Scatter and MPI_Reduce: the program distributes a matrix (initially located at the root process) across all processes, performs some computations on the local part of the matrix on each process, and performs a global summation of the data computed by each process, putting the result on the root process. [Pg.186]
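Figure A.2 itself is not reproduced on this page; the sketch below is a minimal program along the lines described, with the matrix layout (one row per process) and the local computation (a row sum) chosen as illustrative assumptions.

```c
/* Sketch of a scatter/compute/reduce program of the kind described above.
 * Matrix size and the local computation are illustrative assumptions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, nproc, root = 0, ncols = 8;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    /* The root holds a matrix with one row per process. */
    double *matrix = NULL;
    if (rank == root) {
        matrix = malloc(nproc * ncols * sizeof(double));
        for (int i = 0; i < nproc * ncols; i++) matrix[i] = (double)i;
    }

    /* Distribute one row of the matrix to each process. */
    double *row = malloc(ncols * sizeof(double));
    MPI_Scatter(matrix, ncols, MPI_DOUBLE, row, ncols, MPI_DOUBLE,
                root, MPI_COMM_WORLD);

    /* Local computation on the local part of the matrix (here, a row sum). */
    double local = 0.0;
    for (int j = 0; j < ncols; j++) local += row[j];

    /* Global summation of the locally computed data onto the root. */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, root, MPI_COMM_WORLD);

    if (rank == root) printf("global sum = %f\n", total);

    free(row);
    if (rank == root) free(matrix);
    MPI_Finalize();
    return 0;
}
```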



