Big Chemical Encyclopedia


Vector computing

Vector Computers. Most computers considered supercomputers are vector-architecture computers. The concept of vector architecture has been a source of much confusion. [Pg.88]

On a vector computer having vector registers that hold 64 floating-point numbers, this loop would be processed 64 elements at a time. The first 64 elements of Y would be fetched from memory and stored in a vector register. Each iteration of the loop is independent of the previous iteration, so this loop can be fully pipelined, with successive iterations started every clock cycle. Once the pipeline is filled, the result, X, will be produced one element per clock cycle and will be stored in another vector register. The results in the vector register will then be stored back into main memory or used as input to a subsequent vector operation. [Pg.89]
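The register-at-a-time processing described above can be sketched as a strip-mined loop. This is a minimal Python/NumPy illustration, not vector hardware: the chunk size 64 comes from the text, while the operation X(i) = a*Y(i) and the scalar a are hypothetical stand-ins for whatever the loop body computes.

```python
import numpy as np

VLEN = 64  # vector register length, as in the text's example

def scaled_copy(y, a=2.0):
    """Strip-mined sketch of X(i) = a * Y(i): VLEN elements per pass."""
    x = np.empty_like(y)
    for start in range(0, len(y), VLEN):
        chunk = slice(start, start + VLEN)
        # one vector register's worth of work: load a Y chunk, multiply, store to X
        x[chunk] = a * y[chunk]
    return x

y = np.arange(200, dtype=float)
x = scaled_copy(y)
```

Each pass of the outer loop corresponds to one fill of the vector register; the final, shorter chunk (here 200 mod 64 = 8 elements) is handled automatically by the slice.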

There are vastly more complex examples of difficult vectorization decisions. A great deal of effort has been devoted to writing vector code, or code that compilers can safely translate into vector instructions. As compilers become more sophisticated, their ability to recognize vectorization opportunities increases. The vendors of vector computers often claim that vectorization is automatic and that end users need not be concerned with it. This claim is usually true only if the end user is similarly unconcerned with achieving maximum performance. More often than not, codes that have not been written for vector architecture machines must undergo substantial restructuring in order to achieve significant performance enhancements on vector-architecture machines. [Pg.89]
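A hypothetical pair of loops illustrates the kind of decision involved. The first has a loop-carried dependence (each iteration reads the previous result), so successive iterations cannot be overlapped in a pipeline; the second is independent across iterations and maps directly onto a vector instruction. Both routines here are invented for illustration.

```python
import numpy as np

def prefix_sum_scalar(a):
    """Loop-carried dependence: out[i] needs out[i-1], so iterations cannot overlap."""
    out = np.empty_like(a)
    acc = 0.0
    for i, v in enumerate(a):
        acc += v
        out[i] = acc
    return out

def axpy(alpha, x, y):
    """Independent iterations: y[i] = alpha*x[i] + y[i] is one vector operation."""
    return alpha * x + y

a = np.array([1.0, 2.0, 3.0, 4.0])
ps = prefix_sum_scalar(a)        # running sums: [1.0, 3.0, 6.0, 10.0]
z = axpy(2.0, a, np.ones(4))     # [3.0, 5.0, 7.0, 9.0]
```

Restructuring for a vector machine often amounts to rewriting loops of the first kind, where possible, into loops of the second kind.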

Loops that could not be vectorized on conventional vector computers often performed very well under the Multiflow architecture. Unlike vector machines, for which a person could spend a great deal of time optimizing programs, substantially less could be done by hand on the Multiflow, as most of the work fell to the compiler anyway. All the usual optimizations for memory utilization and cache usage also applied to the Multiflow. There were, of course, programs for which the compiler could not make good use of the multiple functional units, and the computer would run at the speed of just one or two individually quite slow functional units. [Pg.94]

A.A. Toropov et al., Predicting water solubility and octanol water partition coefficient for carbon nanotubes based on the chiral vector. Comput. Biol. Chem. 31, 127-128 (2007)... [Pg.215]

The Integration of Chemical Rate Equations on a Vector Computer... [Pg.70]

With the advent of vector processors over the last ten years, the vector computer has become the most efficient and in some instances the only affordable way to solve certain computational problems. One such computer, the Texas Instruments Advanced Scientific Computer (ASC), has been used extensively at the Naval Research Laboratory to model atmospheric and combustion processes, dynamics of laser implosions, and other plasma physics problems. Furthermore, vectorization is achieved in these programs using standard Fortran. This paper will describe some of the hardware and software differences which distinguish the ASC from the more conventional scalar computer and review some of the fundamental principles behind vector program design. [Pg.70]

The intent of this was to start with a useful calculation, which could not be done using brute force techniques, and demonstrate the importance of optimizing the numerical implementation of a reactive flow model to run on a vector computer. As similar problems in combustion become more extensive and intricate, it behooves us to utilize computers in the most efficient manner possible. It is no longer feasible to continue to ask the computer to do more and more work without thought as to how a particular problem is to be implemented. The number of problems for which one would like to use a computer, as well as the complexity of these problems, is increasing at an astronomical rate. The other side of the coin, of course, is that computers, and especially central processors (CPUs), are becoming cheaper. [Pg.93]

For a given input vector, compute the distance to each grid point, where distance is defined, for example, as the Euclidean distance (Equation 4.1). [Pg.62]
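That step can be sketched directly, assuming the grid points are stored as the rows of a weight array (the array names and sizes here are invented for illustration):

```python
import numpy as np

def distances_to_grid(x, grid):
    """Euclidean distance from input vector x to every grid point (row of grid)."""
    return np.sqrt(((grid - x) ** 2).sum(axis=1))

grid = np.array([[0.0, 0.0],
                 [3.0, 4.0],
                 [6.0, 8.0]])
x = np.array([0.0, 0.0])
d = distances_to_grid(x, grid)  # [0.0, 5.0, 10.0]
```

The grid point with the smallest distance (here the first row) is then selected as the winner for the input vector.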

It may be proved that the Q and S vectors computed from Eqs. (5-162) and (5-163) always exactly satisfy the overall (scalar) radiant energy balance ΣQ = ΣS. In words, the total radiant gas emission for all gas zones in the enclosure must always exactly equal the total radiant energy received at all surface zones which comprise the enclosure. In Eqs. (5-162) and (5-163), the following definitions are employed for the four forward-directed exchange areas... [Pg.36]

An analytical expression for the RD of two valence states was found. It is a special case of the algorithm for the RD of atomic couples (atomic vectors) computation [45]. [Pg.155]

Considering long-range MD, a variety of approximate methods have been developed to overcome the bottleneck that the treatment of long-range forces represents; these include particle mesh algorithms, hierarchical methods, and fast multipole methods. One of the most promising developments is the cell multipole method, which scales linearly with N, requires only modest memory, and is well suited to highly parallel and vector computers. [Pg.276]

Vector Computing Use of multiple vector register hardware to exploit parallel computation of vector and array elements. [Pg.288]

S. Brode and R. Ahlrichs, Comput. Phys. Commun., 42, 51 (1986). An Optimised MD Program for the Vector Computer Cyber 205. [Pg.311]

Fig. 5c shows the distributions of force vectors computed from Eqs. 12 and 13 at various locations on the column holder. All vectors are confined in a plane perpendicular to the holder axis. As the holder rotates, both the direction and the net strength of the force vector fluctuate in such a way that the vector becomes longest at the point remote from the centrifuge axis and shortest at the point close to the central axis of the centrifuge. In most locations, the vectors are directed outwardly from the circle except for β < 0.25, where the direction is reversed. [Pg.406]

In this section we describe typical computer requirements for the O(1D) + H2 reaction. The TB, TJ and TK codes use about 2 GBytes of core memory. They have been implemented on a NEC SX-5 vector computer (8 Gflops peak performance per processor) at IDRIS/CNRS (Orsay, France). All the codes are efficiently vectorized and use the optimized BLAS and LAPACK linear algebra libraries. [Pg.194]

Figure 1. Distribution of norms of the error vectors computed by the finite difference formula [Eq. (13)] from exact trajectories of valine dipeptide. The dipeptide was initially equilibrated at 300 K. The largest errors are significant and are of the same order of magnitude as the forces.
The progress in the calculation of highly correlated electronic wavefunctions is due both to the development of improved computational methods and to the rapidly increasing computing power available. In particular, the advent of vector computers has made it possible to perform much larger calculations than before in shorter times. In order to use such machines efficiently, it is essential to adjust the methods to the hardware available. It is generally important to remove all logic from the innermost loops and to perform as many simple vector or matrix operations as possible. [Pg.2]
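The advice to remove logic from the innermost loop can be illustrated with a hypothetical example (the clamping operation itself is invented, not taken from the source): a conditional inside the loop body blocks pipelining, while the branch-free form is a single simple vector operation.

```python
import numpy as np

def clamp_scalar(a, lo):
    """Branch inside the innermost loop: logic the hardware cannot pipeline."""
    out = a.copy()
    for i in range(len(out)):
        if out[i] < lo:
            out[i] = lo
    return out

def clamp_vector(a, lo):
    """Branch-free form: one vector operation, no logic in the loop body."""
    return np.maximum(a, lo)

a = np.array([-1.0, 0.5, 2.0, -3.0])
r1 = clamp_scalar(a, 0.0)
r2 = clamp_vector(a, 0.0)
```

Both produce the same result; the second expresses the intent as a single elementwise operation of the kind vector hardware executes efficiently.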

The advantages are even greater if molecular symmetry can be employed, since only those blocks G(ij) are needed in which (r,i) and (s,j) correspond to orbitals of the same symmetry. The evaluation of the operators G(ij) is particularly efficient on vector computers, because it can be performed in terms of matrix multiplications with long vector lengths. In this case the elements G(ij)rs and P(ij)rs (for fixed i,j) form vectors, and the operators J(ij) and K(ij) form supermatrices. [Pg.20]
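The general idea of casting such an evaluation as a long-vector matrix multiplication can be sketched generically (this is not the actual MCSCF operator; the index names, sizes, and random data are invented for illustration): a four-index contraction G_rs = Σ_tu J[r,s,t,u] P[t,u] becomes a single matrix-vector product once the pair indices (r,s) and (t,u) are flattened.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
J = rng.standard_normal((n, n, n, n))   # supermatrix J[r,s,t,u] (illustrative data)
P = rng.standard_normal((n, n))         # density-like matrix P[t,u]

# element-by-element contraction over the last two indices
G_loop = np.einsum('rstu,tu->rs', J, P)

# the same contraction as one matrix-vector product with vector length n*n
G_mat = (J.reshape(n * n, n * n) @ P.reshape(n * n)).reshape(n, n)
```

The reshaped form does identical arithmetic, but exposes it as one dense multiply with vector length n², which is exactly the shape of work vector (and modern) hardware executes best.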

An obvious disadvantage of the procedure outlined above is that a relatively large amount of memory is needed. It should also be noted that the method is most advantageous for complete reference functions. For simple reference wavefunctions the matrices Aij, C, etc., become very sparse. This sparsity cannot be exploited fully in a vectorized computer code. It may, therefore, be more efficient to use other techniques in such cases. [Pg.57]

Before the details of the implementation of MCSCF methods are discussed, it is useful to introduce a few general computer programming concepts. Modern computers may be classified in several ways depending on size, cost, capabilities, or architecture. One such classification divides computers into scalar and vector machines. Scalar computers (e.g. the VAX 11/780) perform primitive arithmetic operations such as additions and multiplications on pairs of arguments. Vector computers (e.g. the CRAY X-MP and CYBER-205) have, in addition to scalar operations, vector instructions ... [Pg.169]


