Big Chemical Encyclopedia


Vector computing architectures

Given this surprising behavior, the computation time depends on how efficiently the individual steps can be executed. This is a very active area of research and one that is exploring new parallel and vector computer architectures. (As with the simplex method, if the A matrix were completely dense, the computation time would increase with the cube of the number of constraints, but sparse matrix techniques make this rather meaningless.)... [Pg.2534]

Vector Computers. Most computers considered supercomputers are vector-architecture computers. The concept of vector architecture has been a source of much confusion. [Pg.88]

There are vastly more complex examples of difficult vectorization decisions. A great deal of effort has been devoted to writing vector code, or code that compilers can safely translate into vector instructions. As compilers become more sophisticated, their ability to recognize vectorization opportunities increases. The vendors of vector computers often claim that vectorization is automatic and that end users need not be concerned with it. This claim is usually true only if the end user is similarly unconcerned with achieving maximum performance. More often than not, codes that were not written for vector-architecture machines must undergo substantial restructuring before they achieve significant performance enhancements on such machines. [Pg.89]
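
The distinction at the heart of these vectorization decisions can be shown with two small loops; a minimal sketch in C, with illustrative function and array names that do not come from the excerpted text:

    /* Vectorizable: every iteration is independent, so a vectorizing
       compiler can map the loop onto vector loads, multiplies, and adds. */
    void triad(int n, double *y, const double *a, const double *b, double s)
    {
        for (int i = 0; i < n; i++)
            y[i] = a[i] + s * b[i];
    }

    /* Not vectorizable as written: y[i] needs y[i-1] from the previous
       iteration (a loop-carried dependence), so the iterations cannot be
       issued as a single vector instruction without restructuring. */
    void recurrence(int n, double *y, const double *b, double s)
    {
        for (int i = 1; i < n; i++)
            y[i] = s * y[i - 1] + b[i];
    }

It is loops of the second kind, and subtler variants of them, that force the restructuring described above.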

Loops that could not be vectorized on conventional vector computers often performed very well under the Multiflow architecture. Unlike vector machines, on which a person could spend a great deal of time optimizing programs, substantially less hand tuning could be done on the Multiflow, as most of the work fell to the compiler anyway. All the usual optimizations for memory utilization and cache usage also applied to the Multiflow. There were, of course, programs for which the compiler could not make good use of the multiple functional units, and the computer would run at the speed of just one or two individually quite slow functional units. [Pg.94]

Section II discusses the real wave packet propagation method we have found useful for the description of several three- and four-atom problems. As with many other wave packet or time-dependent quantum mechanical methods, as well as iterative diagonalization procedures for time-independent problems, repeated actions of a Hamiltonian matrix on a vector represent the major computational bottleneck of the method. Section III discusses relevant issues concerning the efficient numerical representation of the wave packet and the action of the Hamiltonian matrix on a vector in four-atom dynamics problems. Similar considerations apply to problems with fewer or more atoms. Problems involving four or more atoms can be computationally very taxing. Modern (parallel) computer architectures can be exploited to reduce the physical time to solution and Section IV discusses some parallel algorithms we have developed. Section V presents our concluding remarks. [Pg.2]
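
The bottleneck operation itself is simple to state; below is a minimal sketch in C of a Hamiltonian matrix acting on a wave-packet vector, assuming, purely for illustration and not taken from the paper, that the sparse matrix is held in compressed sparse row (CSR) form:

    /* y = H * x for a sparse Hamiltonian in CSR storage:
       val[] holds the nonzero elements, col[] their column indices, and
       row_ptr[i]..row_ptr[i+1]-1 indexes the nonzeros of row i.
       This kernel is applied repeatedly in wave-packet propagation and in
       iterative diagonalization, so its efficiency dominates the cost. */
    void h_times_x(int n, const int *row_ptr, const int *col,
                   const double *val, const double *x, double *y)
    {
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
                sum += val[k] * x[col[k]];
            y[i] = sum;
        }
    }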

Shared-memory parallel processing was certainly more successful for QC in earlier applications and continues to play a significant role in high performance computational chemistry. A coarse-grained parallel implementation scheme for the direct SCF method by Lüthi et al. allowed for a near-asymptotic speed-up involving a very low parallelization overhead without compromising the vector performance of vector-parallel architectures. [Pg.247]

Before the details of the implementation of MCSCF methods are discussed, it is useful to introduce a few general computer programming concepts. Modern computers may be classified in several ways depending on size, cost, capabilities, or architecture. One such classification divides computers into scalar and vector machines. Scalar computers (e.g. the VAX 11/780) perform primitive arithmetic operations such as additions and multiplications on pairs of arguments. Vector computers (e.g. the CRAY X-MP and CYBER-205) have, in addition to scalar operations, vector instructions ... [Pg.169]
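
The difference can be sketched with a simple AXPY-style loop; the C below and the pseudo-instructions in its comment are illustrative only (the register names and 64-element vector length follow the CRAY convention):

    /* A scalar computer executes this one pair of operands at a time,
       roughly 2n floating-point instructions plus loop overhead.
       A vector computer issues the same work as a few vector instructions,
       each acting on a whole vector register, e.g.
           V1 <- load a[i..i+63]
           V2 <- load b[i..i+63]
           V3 <- s*V1 + V2        (chained multiply and add)
           y[i..i+63] <- store V3
       repeated n/64 times. */
    void axpy(int n, double *y, const double *a, const double *b, double s)
    {
        for (int i = 0; i < n; i++)
            y[i] = s * a[i] + b[i];
    }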

One of the specific types of solutions for ab initio electronic structure was direct methods, wherein intermediate quantities (two-electron integrals) normally stored on disk were recomputed when needed [7]. Binkley's report went on to say that the effort to adapt to the special features of vector and parallel architectures led to the production of better scalar algorithms. In other words, the basic ideas behind algorithms were influenced by the technology, in this case computer architecture, and this is really a very significant and constant theme in the evolution of theoretical and computational chemistry. [Pg.6]

The direct Fourier transforms A(v) and B(v) should be calculated by a Fourier transform routine that can be found in many scientific program libraries. Then the Fourier transform of the correlation function is calculated using (71). In the case where the autocorrelation function is calculated (A = B), this Fourier transform represents a frequency spectrum of the system associated with the property A and can be related to experimental data, as discussed above. The correlation function in the time domain is then obtained by an inverse Fourier transformation. Fast Fourier transformation routines optimized for particular computer architectures are usually provided by computer manufacturers, especially for the parallel or vector multiprocessor systems. [Pg.51]
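
As a concrete sketch of this procedure, the C fragment below computes the frequency spectrum of an autocorrelation function with FFTW, used here merely as a stand-in for the vendor-optimized FFT libraries just mentioned; the function name and normalization conventions are illustrative assumptions:

    /* Frequency spectrum associated with property A (autocorrelation case,
       A = B): Fourier transform the sampled time series A(t) and form
       |A(v)|^2 at each frequency point.  An inverse transform of this
       spectrum would return the time-domain correlation function.
       Link with -lfftw3. */
    #include <fftw3.h>

    void autocorr_spectrum(int n, const double *a_t,
                           double *spectrum /* length n/2 + 1 */)
    {
        double *in = fftw_malloc(sizeof(double) * n);
        fftw_complex *af = fftw_malloc(sizeof(fftw_complex) * (n / 2 + 1));

        for (int i = 0; i < n; i++)
            in[i] = a_t[i];

        fftw_plan fwd = fftw_plan_dft_r2c_1d(n, in, af, FFTW_ESTIMATE);
        fftw_execute(fwd);

        for (int k = 0; k <= n / 2; k++)        /* |A(v)|^2 = Re^2 + Im^2 */
            spectrum[k] = af[k][0] * af[k][0] + af[k][1] * af[k][1];

        fftw_destroy_plan(fwd);
        fftw_free(af);
        fftw_free(in);
    }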

The book shows how intense the work on designing parallel and vector algorithms has been in recent years: accurate electronic structure calculations of reactive systems, exact and high-level approximate three-dimensional calculations of the reactive dynamics, and efficient directive and declarative software for modeling complex systems. In turn, these advances have posed new and more complex problems. Some of them are concerned with the definition of the computer architecture best suited for chemical calculations. Others are concerned with balancing vector and parallel structures within the application. [Pg.1]

Using (x1, t1), ..., (xp, tp) as training data, a regression model P is constructed. In our case, P is the function from the space of i-dimensional vectors to the space of o-dimensional vectors computed by an MLP with a given architecture that was trained with... [Pg.140]

The SIMD category comprises both vector computers, such as the CRAY-1, and so-called processor arrays, such as the Thinking Machines Connection Machine and the classic ILLIAC-IV computer. SIMD computers are well suited for applications requiring identical operations on uniform data structures, but SIMD computers cannot necessarily be used efficiently for general-purpose calculations. Presently, MIMD is the most common parallel architecture, and the majority of parallel computers used for quantum chemistry applications are of the MIMD type. Examples of MIMD computers are the... [Pg.1991]
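
A minimal MIMD-style sketch in C, not taken from the cited text, using MPI: every process runs its own copy of the program on its own slice of the data, which is exactly the freedom that distinguishes MIMD from the lock-step SIMD model. The problem solved here is a placeholder.

    /* Each MPI process sums its own block of a series independently
       (different processes may follow different control paths), and the
       partial sums are combined with a collective reduction.
       Compile with mpicc. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const long N = 1000000;
        long chunk = N / nprocs;
        long lo = rank * chunk + 1;
        long hi = (rank == nprocs - 1) ? N : lo + chunk - 1;

        double local = 0.0, total = 0.0;
        for (long i = lo; i <= hi; i++)
            local += 1.0 / (double)i;

        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("sum = %f\n", total);

        MPI_Finalize();
        return 0;
    }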

With the powerful quantum mechanical computational methods that are presently available, it is possible to obtain accurate ab initio electronic wavefunctions for modest to large size systems. The availability of powerful computers, especially workstations and vector and massively parallel supercomputers, makes it possible to take extensive advantage of these computational methods. The recent advances in both computer architecture and numerical methods have been truly breathtaking, and we are now able to obtain reasonably accurate solutions for isolated molecules as well as for cluster models of condensed matter. In this review, our concern is the use of cluster models to describe chemistry at solid surfaces; in particular, the chemical bond formed between adsorbates and solid substrates. [Pg.2870]

Supercomputers from vendors such as Cray, NEC, and Fujitsu typically consist of between one and eight processors in a shared memory architecture. Peak vector speeds of over 1 GFLOPS (1000 MFLOPS) per processor are now available. Main memories of 1 gigabyte (1000 megabytes) and more are also available. If multiple processors can be tied together to simultaneously work on one problem, substantially greater peak speeds are available. This situation will be further examined in the section on parallel computers. [Pg.91]

The most commercially successful of these systems has been the Convex series of computers. Ironically, these are traditional vector machines, with one to four processors and shared memory. Their Cray-like characteristics were always a strong selling point. Interestingly, SCS, which marketed a minisupercomputer that was fully binary compatible with Cray, went out of business. Marketing appears to have played as much of a role here as the inherent merits of the underlying architecture. [Pg.94]

Mathematical models require computation to secure concrete predictions. Successes in relatively simple cases spur interest in more complex situations. Somewhat specialized computer hardware and software have emerged in response to these demands. Examples are the high-end processors with vector architecture, such as the Cray series, the CDC Cyber 205, and the recently announced IBM 3090 with vector attachment. When a computation can effectively utilize vector architecture, such machines will outperform even the most powerful conventional scalar machine by a substantial margin. Such performance has given rise to the term "supercomputer". [Pg.237]

From a computational point of view, the forms of the Jacobian entries above are welcome because they conform to the architectural requirements of vector, parallel, and vector-parallel computers (Fig. 4.3). [Pg.62]

A newer measure of an algorithm's theoretical performance is its Mop-cost, which is defined exactly as the Flop-cost except that memory operations (Mops) are counted instead of floating-point operations (Flops). A Mop is a load from, or a store to, fast memory. There are sound theoretical reasons why Mops should be a better indicator of practical performance than Flops, especially on recent computers employing vector or RISC architectures, and this has been discussed in detail by Frisch et al. [62]. To cut a long story short, the Mops measure is useful because, on modern computers and in contrast to older ones, memory traffic generally presents a tighter bottleneck than floating-point arithmetic. [Pg.151]
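
A toy illustration, not from the cited work, of why the Mop count matters: counting both measures for a DAXPY-style loop in C shows the memory traffic outnumbering the arithmetic.

    /* Per iteration: 2 flops (one multiply, one add) but 3 mops
       (load x[i], load y[i], store y[i]), so the loop costs 2n flops
       and 3n mops -- the memory operations, not the arithmetic,
       set the pace on most modern processors. */
    void daxpy(int n, double alpha, const double *x, double *y)
    {
        for (int i = 0; i < n; i++)
            y[i] = alpha * x[i] + y[i];
    }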

