Big Chemical Encyclopedia

Chemical substances, components, reactions, process design ...


Floating Point Operation

A common acronym is MFLOPS, millions of floating-point operations per second. Because most scientific computations are limited by the speed at which floating point operations can be performed, this is a common measure of peak computing speed. Supercomputers of 1991 offered peak speeds of 1000 MFLOPS (1 GFLOP) and higher. [Pg.88]
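As a rough illustration (not from the source), a rate like MFLOPS is simply a count of floating-point operations divided by elapsed time. A minimal sketch in Python, with a hypothetical `estimate_mflops` helper (a pure-Python loop, so interpreter overhead dominates and the number is far below hardware peak):

```python
import time

def estimate_mflops(n=1_000_000):
    """Estimate MFLOPS by timing n multiply-add pairs (2 FLOPs each)."""
    x = 1.0
    t0 = time.perf_counter()
    for _ in range(n):
        x = x * 1.0000001 + 1e-9  # one multiply + one add = 2 FLOPs
    elapsed = time.perf_counter() - t0
    flops = 2 * n
    return flops / elapsed / 1e6  # millions of FLOPs per second

rate = estimate_mflops()
print(f"approx. {rate:.1f} MFLOPS (interpreter-limited, not hardware peak)")
```

The same operation count divided by time underlies the peak figures quoted for the 1991 supercomputers above.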

At best the ST-100 is capable of 100 million floating-point operations per second (MFLOPS); however, for reasonably large macro-... [Pg.124]


Fig. 4.4. Comparison of the computing effort, expressed in thousands of floating point operations (kflop), required to factor the Jacobian matrix for a 20-component system (Nc = 20) during a Newton-Raphson iteration. For a technique that carries a nonlinear variable for each chemical component and each mineral in the system (top line), the computing effort increases as the number of minerals increases. For the reduced basis method (bottom line), however, less computing effort is required as the number of minerals increases.
We can prove that the above algorithm converges in polynomial time (i.e., the number of floating-point operations is bounded by a polynomial in the problem sizes m and n) by choosing the parameters cn and a appropriately. See Refs. [Pg.113]

The computational cost of each major iteration is at most proportional to mn + floating-point operations, and the maximum number... [Pg.113]

For the SDP problems arising from the variational calculation, in which we are interested, the theoretical number of floating-point operations required by parallel Primal-Dual interior-point method-based software scales as... [Pg.116]

Theoretical Number of Floating-Point Operations per Iteration (FLOPI), Maximum Number of Major Iterations, and Memory Usage for the Parallel Primal-Dual Interior-Point Method (pPDIPM) and for the First-Order Method (RRSDP) Applied to Primal and Dual SDP Formulations". [Pg.116]

From the table, we can see that the first-order method usually requires fewer floating-point operations and less memory storage than the Primal-Dual interior-point method. The only drawback of the former method is that its convergence cannot be guaranteed within a certain time frame. [Pg.117]

We can also conclude that if we employ the Primal-Dual interior-point method, the dual SDP formulation provides a more compact mathematical description of the variational calculation of the 2-RDM than the primal SDP formulation, and it also leads to a faster computational solution. On the other hand, the number of floating-point operations and the memory storage of RRSDP do not depend on whether the primal or dual SDP formulation is used. [Pg.117]

The maximum vector capability occurs for matrix multiplication, for which the measured time on the CRAY-1 is twenty times faster than the best hand-coded routines on the CDC 7600 or IBM 360/195. The maximum rate is circa 135 MFLOPS (millions of floating-point operations per second) for matrices whose dimensions are a multiple of 64, the vector register size. The rate of computation for matrix multiplication is shown in figure 1 as a function of matrix size. [Pg.10]
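A peak-rate figure like this can be related to the conventional FLOP count for a dense n x n matrix multiply, 2n^3 (n multiplies and n adds for each of the n^2 outputs). A minimal sketch, assuming that standard counting convention (the helper names are my own):

```python
def matmul_flops(n):
    """Conventional FLOP count for a dense n x n matrix multiply:
    each of the n^2 output elements needs n multiplies and n adds."""
    return 2 * n ** 3

def mflops(n, seconds):
    """MFLOPS rate implied by the wall time of one n x n multiply."""
    return matmul_flops(n) / seconds / 1e6

# A 64x64 multiply (one vector register width on the CRAY-1) costs
# 2 * 64**3 = 524288 FLOPs; at a 135 MFLOPS peak that is roughly
# 4 milliseconds of floating-point work.
print(matmul_flops(64))
```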

Finally, most doubly or triply subscripted array operations can execute as a single vector instruction on the ASC. To demonstrate the hardware capabilities of the ASC, the vector dot product matrix multiplication instruction, which utilizes one of the most powerful pieces of hardware on the ASC, is compared to similar code on an IBM 360/91 and the CDC 7600 and Cyber 174. Table IV lists the Fortran pattern, which is recognized by the ASC compiler and collapsed into a single vector dot product instruction, the basic instructions required, and the hardware speeds obtained when executing the same matrix operations on all four machines. Since many vector instructions in a CP pipe produce one result every clock cycle (80 nanoseconds), ordinary vector multiplications and additions (together) execute at the rate of 24 million floating point operations per second (MFLOPS). For the vector dot product instruction, however, each output value produced represents a multiplication and an addition. Thus, vector dot product on the ASC attains a speed of 48 million floating point operations per second. [Pg.78]

The coding in Table I illustrates the central problem of simulations. The number of pairs is N(N-1)/2. The number of floating point operations (FLOPs) per pair is about 25, assuming the branches are executed 50% of the time. Thus for 100 atoms (a minimal simulation) we will need about 1.2 x 10^5 FLOPs for a single time step. The number of memory and indexing operations is similarly large. Typically one needs to execute between 10 and 10 time steps. Thus the simulations are limited by the number of floating point operations one can afford. [Pg.129]
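The pair-count arithmetic above can be sketched directly (the helper name is my own): N(N-1)/2 unique pairs at roughly 25 FLOPs each gives about 1.2 x 10^5 FLOPs per step for 100 atoms.

```python
def pair_flops(n_atoms, flops_per_pair=25):
    """Approximate FLOPs per time step for an all-pairs interaction loop."""
    n_pairs = n_atoms * (n_atoms - 1) // 2  # N(N-1)/2 unique pairs
    return n_pairs * flops_per_pair

print(pair_flops(100))  # 4950 pairs x 25 FLOPs = 123750, about 1.2e5
```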

Tp is the time in seconds to execute the pairwise sum in equation (1); T is the total time in seconds per molecular dynamics step. MFLOPS is the number of millions of floating point operations per second of the code in Table I, assuming that it contains 25 FLOPs per pair (i.e., MFLOPS = 1.25 x 10^-5 x N(N-1)/Tp), where N is the total number of atoms, 81 or 648. The asterisk on CRAY-1 indicates a vectorized version of CLAMPS was used. [Pg.132]
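This rate formula can be checked numerically; the sketch below assumes the prefactor is 1.25 x 10^-5, which follows from 25 FLOPs per pair over N(N-1)/2 pairs, divided by 10^6 to convert to millions (the function name is my own):

```python
def mflops_from_pair_time(n_atoms, tp_seconds):
    # 25 FLOPs/pair * N(N-1)/2 pairs = 12.5 * N * (N-1) FLOPs per step;
    # dividing by 1e6 to get millions yields the 1.25e-5 prefactor.
    return 1.25e-5 * n_atoms * (n_atoms - 1) / tp_seconds

# e.g. the smaller benchmark system, N = 81 atoms, with Tp = 0.1 s:
print(mflops_from_pair_time(81, 0.1))  # about 0.81 MFLOPS
```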

The branch for the case when the squared pair separation is outside the table will inhibit vectorization. The last element of the table has been changed to zero, and all occurrences outside the table are truncated to LMAX. The rest of the code, which is not executed on the VAX or CDC 7600, is executed here. It is often necessary on a vector machine to increase the total number of floating point operations to achieve vector rather than scalar processing. The MFLOP rates reported here are computed on the basis of the original number of floating point operations; the extra operations added to achieve vectorization are not included. [Pg.133]
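The clamp-instead-of-branch trick described above can be sketched with NumPy (my own illustration, not the original CLAMPS code; the table contents and length are hypothetical): out-of-range indices are clamped to the last, zeroed table slot, so the lookup stays a single vector operation with no data-dependent branch.

```python
import numpy as np

LMAX = 1024                                    # hypothetical table length
table = np.exp(-np.linspace(0.0, 8.0, LMAX))   # stand-in potential table
table[LMAX - 1] = 0.0                          # last entry zeroed, as in the text

def vector_lookup(idx):
    """Branch-free table lookup: indices past the end are clamped to
    the final (zero) slot instead of being tested one by one."""
    return table[np.minimum(idx, LMAX - 1)]

idx = np.array([0, 500, 2000, 9999])  # the last two fall outside the table
vals = vector_lookup(idx)             # out-of-range entries come back as 0.0
```

Separations beyond the table range thus contribute zero automatically, reproducing the effect of the original branch at vector speed.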

FDDI Fiber distributed data interface, an optical-fiber-based interconnect. FLOPS Floating point operations per second. [Pg.285]

GFLOPS Gigaflops, one billion (10^9) floating point operations per second. [Pg.285]


