Big Chemical Encyclopedia


Arithmetic floating point operations

Arc-length continuation, steady states of a model premixed laminar flame, 410
Architecture, between parallel machines, 348
Arithmetic control processor, ST-100, 125
Arithmetic floating point operations,... [Pg.423]

As the dimension of the blocks of the Hessian matrix increases, it becomes more efficient to solve for the wavefunction corrections using iterative methods instead of direct methods. The most useful of these methods require a series of matrix-vector products. Since a square matrix-vector product may be computed in 2N^2 arithmetic operations (where N is the matrix dimension), an iterative solution that requires only a few of these products is more efficient than a direct solution (which requires approximately N^3 floating-point operations). The most stable of these methods expand the solution vector in a subspace of trial vectors. During each iteration of this procedure, the dimension of the subspace is increased until some measure of the error indicates that sufficient accuracy has been achieved. Such iterative methods for both linear equations and matrix eigenvalue equations have been discussed in the literature. [Pg.185]
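To make the operation counts concrete, the following is a minimal C sketch of one such iterative scheme: a plain conjugate-gradient solve of Ax = b for a symmetric positive-definite matrix. It is offered only as an illustration of the cost argument, not as the specific subspace method the excerpt refers to. Each iteration performs a single dense matrix-vector product of roughly 2N^2 flops, so convergence in k << N iterations undercuts the roughly N^3 flops of a direct factorization.

#include <stdio.h>
#include <math.h>

#define N 4   /* matrix dimension (illustrative) */

/* y = A*x : one dense matrix-vector product, about 2*N*N flops */
static void matvec(const double A[N][N], const double x[N], double y[N]) {
    for (int i = 0; i < N; i++) {
        y[i] = 0.0;
        for (int j = 0; j < N; j++)
            y[i] += A[i][j] * x[j];
    }
}

static double dot(const double a[N], const double b[N]) {
    double s = 0.0;
    for (int i = 0; i < N; i++)
        s += a[i] * b[i];
    return s;
}

int main(void) {
    /* small symmetric positive-definite test matrix (made up for this sketch) */
    double A[N][N] = { {4, 1, 0, 0},
                       {1, 3, 1, 0},
                       {0, 1, 2, 1},
                       {0, 0, 1, 2} };
    double b[N] = {1, 2, 0, 1};
    double x[N] = {0}, r[N], p[N], Ap[N];

    matvec(A, x, Ap);                     /* r = b - A*x (x = 0 initially) */
    for (int i = 0; i < N; i++) {
        r[i] = b[i] - Ap[i];
        p[i] = r[i];
    }
    double rr = dot(r, r);

    for (int k = 0; k < N && sqrt(rr) > 1e-12; k++) {
        matvec(A, p, Ap);                 /* the only O(N^2) step per iteration */
        double alpha = rr / dot(p, Ap);
        for (int i = 0; i < N; i++) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rr_new = dot(r, r);
        double beta = rr_new / rr;
        for (int i = 0; i < N; i++)
            p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }

    for (int i = 0; i < N; i++)
        printf("x[%d] = %.6f\n", i, x[i]);
    return 0;
}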

A newer measure of an algorithm's theoretical performance is its Mop-cost, which is defined exactly as the Flop-cost except that memory operations (Mops) are counted instead of floating-point operations (Flops). A Mop is a load from, or a store to, fast memory. There are sound theoretical reasons why Mops should be a better indicator of practical performance than Flops, especially on recent computers employing vector or RISC architectures, and this has been discussed in detail by Frisch et al. [62]. To cut a long story short, the Mops measure is useful because, on modern computers and in contrast to older ones, memory traffic generally presents a tighter bottleneck than floating-point arithmetic. [Pg.151]
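As a rough illustration of Flop versus Mop counting (my own example, not Frisch et al.'s analysis, and ignoring registers and caches), compare the two loops below. Both stream through the same arrays, but the first performs 2 Flops for every 3 Mops while the second performs 6 Flops for only 2 Mops, so the first is far more likely to be limited by memory traffic than by arithmetic.

#include <stdio.h>

#define N 1000000

static double x[N], y[N];

int main(void) {
    double a = 2.5;

    /* AXPY: per iteration 2 Flops (1 mul + 1 add) but 3 Mops
     * (load x[i], load y[i], store y[i]) -> Flop:Mop ratio about 0.7,
     * so the loop is bounded by memory traffic rather than arithmetic. */
    for (long i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    /* Cubic polynomial in x[i]: per iteration 6 Flops (3 mul + 3 add) for
     * only 2 Mops (load x[i], store y[i]) -> ratio 3.0, proportionally
     * more arithmetic per memory operation. */
    for (long i = 0; i < N; i++)
        y[i] = ((a * x[i] + 1.0) * x[i] + 2.0) * x[i] + 3.0;

    printf("y[0] = %f\n", y[0]);
    return 0;
}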

Floating Point. Integrated floating point units first arrived as separate coprocessors under the direct control of the microprocessor. However, these processors performed arithmetic with numerous sequential operations, resulting in performance too slow for real-time signal processing. [Pg.127]

Peripheral processors capable of performing floating-point arithmetic operations at high speed are used to compensate for the poor performance of popular general-purpose minicomputers in this area. These devices are described in various ways, but the following nomenclature will be used in this paper. [Pg.194]

The Intel 8086 or 8088 microprocessors could be used in conjunction with the Intel 8087 floating-point processor chip (4), which is probably twice as fast as the Am9511A for on-chip operations and includes extended-precision arithmetic in its instruction set. Unfortunately the 8087 existed only on paper, not in silicon, when this work started. The 8087 is now (January 1981) available in sample quantities at a price far in excess of the Am9511A. In addition to the price and availability problems, the instruction set of the 8087 is less suited to chemical computations than the Am9511A in that many transcen-... [Pg.196]

Clearly, for Jane the future of IT had already manifested itself in many more ways than the PC on your desk. Via the increasingly powerful Internet you will be able to interact with very powerful supercomputers as if they were your own. Remember that, at the time of tidying up this book, IBM hardware broke the petaflop barrier of 10^15 floating-point (arithmetic) operations per second. Many distinctions between local and remote computing will vanish. The PC is... [Pg.479]

Hardware floating-point processor: performs floating-point arithmetic operations at very high speed and greatly expands the computational speed of the machine. [Pg.287]

Burks, Goldstine, and von Neumann first identified the principal components of the general-purpose computer as the arithmetic, memory, control, and input-output organs, and then proceeded to formulate the structure and essential characteristics of each unit for the IAS machine. Alternatives were considered, and the rationale behind each selected choice was presented. Adoption of the binary, rather than decimal, number system was justified by its simplicity and speed in elementary arithmetic operations, its applicability to logical instructions, and the inherent binary nature of electronic components. Built-in floating-point hardware was ruled out, for the prototype at least, as a waste of the critical memory resource and because of the increased complexity of the circuitry; consideration was given instead to software implementation of such a facility. [Pg.274]

C has a cavalier attitude toward operations involving different numeric types. It allows you to perform mixed operations involving any of the numeric types, such as adding a character to a floating-point value. There is a standard set of rules, called the usual arithmetic conversions, that specifies how operations will be performed when the operands are of different types. Without going into detail, the usual arithmetic conversions typically direct that when two operands have a different precision, the less precise operand is promoted to match the more precise operand, and signed types are (when necessary) converted to unsigned. [Pg.20]
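A short C example of the usual arithmetic conversions described above (the variable names are purely illustrative):

#include <stdio.h>

int main(void) {
    char   c = 'A';        /* value 65 */
    double d = 0.5;
    float  f = 1.25f;
    int    i = -1;
    unsigned int u = 1;

    /* char is promoted to int, then converted to double: 65 + 0.5 */
    printf("c + d = %f\n", c + d);       /* 65.500000 */

    /* float is converted to double: the less precise operand is promoted */
    printf("f + d = %f\n", f + d);       /* 1.750000 */

    /* the signed int is converted to unsigned, so -1 wraps to UINT_MAX
     * and the comparison is, perhaps surprisingly, false */
    printf("i < u: %d\n", i < u);        /* 0 */

    return 0;
}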

Almost all operations in MATLAB are performed in double-precision arithmetic conforming to IEEE Standard 754 (double precision calls for 52 mantissa bits). This represents the finest resolution at which MATLAB can see two very close numbers as two different entities. The following examples illustrate the concept of floating-point-related computational problems. [Pg.87]
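As a C analogue of the resolution argument (a sketch of the idea, not the MATLAB examples themselves, which are not reproduced in this excerpt): with 52 mantissa bits the spacing between 1.0 and the next representable double is 2^-52, so perturbations much smaller than that simply vanish.

#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void) {
    /* 52 mantissa bits give a spacing of 2^-52 between 1.0 and the next double */
    double eps = pow(2.0, -52);

    printf("2^-52       = %.17g\n", eps);
    printf("DBL_EPSILON = %.17g\n", DBL_EPSILON);   /* the same value */

    /* 1.0 + eps is the next representable number; 1.0 + eps/4 rounds back to 1.0 */
    printf("1.0 + eps   > 1.0 ? %d\n", 1.0 + eps     > 1.0);   /* 1 */
    printf("1.0 + eps/4 > 1.0 ? %d\n", 1.0 + eps / 4 > 1.0);   /* 0 */
    return 0;
}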

Consequently, more often than not, the result of an arithmetic operation cannot be represented exactly as a valid floating-point number. Suppose the correct answer falls between two adjacent normalized floating-point numbers. The computer must round to one of them, and this produces roundoff error. Such errors can cause serious problems in algorithms that involve a large number of repetitive calculations. [Pg.88]
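A minimal C illustration of how such roundoff compounds over many repetitive calculations (my own example): 0.1 has no exact binary representation, so each addition below is rounded, and the accumulated sum drifts measurably from the exact answer.

#include <stdio.h>

int main(void) {
    /* 0.1 is not exactly representable in binary floating point, so every
     * addition introduces a small rounding error that accumulates. */
    double sum = 0.0;
    for (long i = 0; i < 10000000; i++)
        sum += 0.1;

    printf("computed sum = %.10f\n", sum);            /* close to, but not, 1000000 */
    printf("difference   = %.3e\n", sum - 1000000.0);
    return 0;
}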


See other pages where Arithmetic floating point operations is mentioned: [Pg.127]    [Pg.127]    [Pg.61]    [Pg.9]    [Pg.30]    [Pg.276]    [Pg.278]    [Pg.206]    [Pg.86]    [Pg.194]    [Pg.195]    [Pg.203]    [Pg.209]    [Pg.213]    [Pg.234]    [Pg.263]    [Pg.172]    [Pg.128]    [Pg.277]    [Pg.280]    [Pg.295]    [Pg.181]    [Pg.29]    [Pg.325]    [Pg.1409]    [Pg.100]    [Pg.146]    [Pg.86]    [Pg.155]    [Pg.192]   




