Big Chemical Encyclopedia


Floating point arithmetic

[IEEE Computer Society Standards Committee, 1985] IEEE Computer Society Standards Committee (1985). IEEE Standard for Binary Floating-Point Arithmetic. Institute of Electrical and Electronics Engineers. [Pg.547]

[Sohie and Kloker, 1988] Sohie, G. R. and Kloker, K. L. (1988). A Digital Signal Processor with IEEE Floating-Point Arithmetic. IEEE Micro, 8(6):49-67. [Pg.563]

Peripheral processors capable of performing floating-point arithmetic operations at high speed are used to compensate for the poor performance of popular general-purpose minicomputers in this area. These devices are described in various ways, but the following nomenclature will be used in this paper. [Pg.194]

Either of the remaining two options to be discussed would provide at least an order-of-magnitude increase in speed over the Am9511A, and both are much coveted by the author; unfortunately, they would require financial resources on a very large scale to develop into usable floating-point arithmetic processors. [Pg.202]

It has been the author's experience that most chemical computations, structural or otherwise, which involve a significant amount of floating-point arithmetic are reducible to vector format. The exercise is usually worthwhile, as in most cases the chemist is using the same program over and over again with different data sets. [Pg.224]
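
As a purely illustrative sketch (not from the cited source), the reduction to vector format described above amounts to expressing the inner work of a computation as operations over whole arrays, as in the DAXPY-style update below; the function name and arguments are hypothetical.

/* Hypothetical sketch of a computation recast in vector format:
 * every iteration is independent, so the loop is the vector operation
 * y <- y + a*x that vector hardware (or a vectorizing compiler) handles well. */
#include <stddef.h>

void daxpy(size_t n, double a, const double *x, double *y)
{
    for (size_t i = 0; i < n; ++i)   /* no cross-iteration dependence */
        y[i] += a * x[i];
}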

The kernel is applied first as if it were the mask, by either push or pull, and then the resulting polygon is smoothed out iteratively. It is arguable that this is likely to be the least susceptible to inaccuracies from floating point arithmetic. [Pg.168]

A newer measure of an algorithm's theoretical performance is its Mop-cost, which is defined exactly as the Flop-cost except that memory operations (Mops) are counted instead of floating-point operations (Flops). A Mop is a load from, or a store to, fast memory. There are sound theoretical reasons why Mops should be a better indicator of practical performance than Flops, especially on recent computers employing vector or RISC architectures, and this has been discussed in detail by Frisch et al. [62]. To cut a long story short, the Mops measure is useful because, on modern computers and in contrast to older ones, memory traffic generally presents a tighter bottleneck than floating-point arithmetic. [Pg.151]
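
To make the distinction concrete, here is a hedged, back-of-the-envelope count for a simple vector update; it only illustrates the idea and is not the accounting actually used by Frisch et al. [62].

/* Illustrative flop/mop bookkeeping for y <- y + a*x over n elements
 * (an assumption for this page, not taken from the cited discussion). */
#include <stddef.h>

void daxpy_counted(size_t n, double a, const double *x, double *y,
                   unsigned long *flops, unsigned long *mops)
{
    for (size_t i = 0; i < n; ++i) {
        y[i] += a * x[i];
        *flops += 2;   /* one multiply + one add           -> 2n flops */
        *mops  += 3;   /* load x[i], load y[i], store y[i] -> 3n mops  */
    }
    /* 3n mops versus 2n flops: on a machine where memory traffic is the
       bottleneck, the mop count is the better predictor of run time. */
}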

Clearly, the future of IT had, for Jane, already manifested itself in many more ways than the PC on your desk. Via the increasingly powerful Internet you will be able to interact with very powerful supercomputers as if they were your own. Remember that, at the time of tidying up this book, IBM hardware broke the petaflop barrier, at 10^15 floating-point (arithmetic) operations per second. Many distinctions between local and remote computing will vanish. The PC is... [Pg.479]

Hardware floating-point processor Performs floating-point arithmetic operations at very high speed and tremendously expands the computational speed of the machine. [Pg.287]

Modern processor architectures exploit the parallelism inside the instruction stream by executing independent instructions concurrently using multiple functional units. This independence relation can be computed from the program: in the situation of pure floating-point arithmetic instructions considered in this paper, it can be inferred from the program text, and there is a trade-off between compiler and hardware measures to exploit it. As soon as data dependencies across load-store instructions are considered, dependencies can only be computed at run time. In a later paper, we show how to extend the simple model presented here to also cope with such dynamic dependencies, as well as with speculative execution of instructions resulting from branch prediction. [Pg.30]
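
A minimal C sketch of the distinction being drawn (hypothetical code, not from the paper): the first block contains only floating-point instructions whose independence is visible in the program text, while the second contains a dependence through memory that can only be resolved at run time.

#include <stddef.h>

void ilp_example(double *a, double *b, size_t i, size_t j)
{
    /* pure floating-point arithmetic: t0 and t1 do not depend on each
       other, so a compiler or superscalar core may execute them in parallel */
    double t0 = a[0] * a[1];
    double t1 = a[2] + a[3];
    b[0] = t0 + t1;

    /* load-store dependence: if i == j, the store to b[i] feeds the load of
       b[j]; whether it does cannot be decided from the program text alone */
    b[i] = t0;
    double t2 = b[j] * 2.0;
    b[1] = t2;
}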

Some applications require floating-point arithmetic (which is not supported by the emulator board and module library). [Pg.173]

Overton, Michael L. Numerical Computing with IEEE Floating Point Arithmetic. Philadelphia: Society for Industrial and Applied Mathematics, 2001. A complete discussion of the standard representation of the real numbers as used on a computer. [Pg.1316]

With floating-point arithmetic it is necessary to quantize after both multiplications and additions. The addition quantization arises because, prior to addition, the mantissa of the smaller number in the sum is shifted right until the exponents of the two numbers are the same. In general, this gives a sum mantissa that is too long and so must be quantized. [Pg.825]
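
A small, hedged demonstration of both effects in C (assuming IEEE double precision; the constants are chosen only to make the rounding visible):

/* Addition: aligning the smaller mantissa discards its low-order bits. */
#include <stdio.h>

int main(void)
{
    double big   = 1.0e17;
    double small = 1.0;
    double sum   = big + small;   /* small's mantissa is shifted far right
                                     and rounded away entirely            */
    printf("%d\n", sum == big);   /* prints 1 */

    /* Multiplication: the exact product of two 53-bit mantissas needs up to
       106 bits and must be rounded back to 53. */
    double p = 0.1 * 0.1;
    printf("%.17g\n", p);         /* prints 0.010000000000000002, not 0.01 */
    return 0;
}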

We will assume that quantization in floating-point arithmetic is performed by rounding. Because of the exponent in floating-point arithmetic, it is the relative error that is important. The relative error is defined as... [Pg.825]
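
The definition is cut off in the excerpt; as a hedged reminder (the source's exact continuation may differ), the usual textbook form is

\varepsilon = \frac{Q(x) - x}{x}, \qquad Q(x) = x\,(1 + \varepsilon), \qquad |\varepsilon| \lesssim 2^{-t},

where Q(x) is the quantized (rounded) value and t is the number of mantissa bits.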

Limit cycles are primarily of concern in fixed-point recursive filters. As long as floating-point filters are realized as the parallel or cascade connection of first- and second-order subfilters, limit cycles will generally not be a problem, since limit cycles are practically not observable in first- and second-order systems implemented with 32-bit floating-point arithmetic (Bauer, 1993). It has been shown that such systems must have an extremely small margin of stability for limit cycles to exist at anything other than underflow levels, which are at amplitudes below about 10^-38 for 32-bit floating point (Bauer, 1993). [Pg.828]
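
A hedged simulation of the contrast: the recursion below mimics a first-order fixed-point filter by rounding the state to an integer grid after each multiply; the coefficient and start value are arbitrary choices for illustration.

/* Zero-input response of y[n] = a*y[n-1], once with the state quantized to
 * an integer grid (a crude model of a fixed-point register) and once in
 * double precision. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double a = -0.9;
    double y_fixed = 6.0;   /* state held on the integer quantization grid */
    double y_float = 6.0;   /* same recursion in floating point            */

    for (int n = 0; n < 20; ++n) {
        y_fixed = rint(a * y_fixed);   /* quantize after the multiply */
        y_float = a * y_float;
        printf("%2d  fixed %+3.0f   float %+9.6f\n", n, y_fixed, y_float);
    }
    /* The quantized state settles into a sustained +/-4 oscillation (the
       exact level depends on the rounding rule) -- a limit cycle -- while
       the floating-point state decays toward zero. */
    return 0;
}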

An overflow oscillation, sometimes also referred to as an adder overflow limit cycle, is a high-level oscillation that can exist in an otherwise stable fixed-point filter due to the gross nonlinearity associated with the overflow of internal filter calculations (Ebert, Mazo, and Taylor, 1969). Like limit cycles, overflow oscillations require recursion to exist and do not occur in nonrecursive FIR filters. Overflow oscillations also do not occur with floating-point arithmetic due to the virtual impossibility of overflow. [Pg.828]
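
A minimal illustration (hypothetical, not from Ebert, Mazo, and Taylor) of the overflow nonlinearity itself: in two's-complement fixed point an out-of-range sum wraps to the opposite sign, which is what a recursive filter can keep re-exciting, whereas the floating-point sum simply keeps its correct value.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t x = 30000, y = 10000;
    int16_t s_fixed = (int16_t)(x + y);   /* 40000 wraps to -25536 on
                                             two's-complement hardware   */
    double  s_float = 30000.0 + 10000.0;  /* stays 40000.0               */

    printf("fixed-point sum:    %d\n", (int)s_fixed);
    printf("floating-point sum: %.1f\n", s_float);
    return 0;
}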

Differences between the modelling environment and the target computer (for example, in its handling of floating point arithmetic) may mean that the executable object code behaves differently from the model. This limits the credit that may be taken for simulations carried out in the modelling environment. [Pg.307]
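
One common source of such differences, shown as a hedged sketch below, is arithmetic precision: a modelling tool evaluating in double precision and a target using single precision accumulate rounding error at very different rates (the loop count and constant are arbitrary).

#include <stdio.h>

int main(void)
{
    float  f = 0.0f;   /* e.g. the precision used on an embedded target */
    double d = 0.0;    /* e.g. the precision used by the modelling tool */

    for (int i = 0; i < 1000000; ++i) {
        f += 0.1f;
        d += 0.1;
    }
    printf("single precision: %f\n", f);   /* visibly off from 100000    */
    printf("double precision: %f\n", d);   /* agrees with 100000 closely */
    return 0;
}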

A signal processor and a floating point arithmetic example. DAA... [Pg.38]





Arithmetic

Arithmetic double-precision floating point

Arithmetic floating point operations

Float

Floating

Floating point
