Big Chemical Encyclopedia

Floating point model

While attempting to run this simulation in Micro-Cap, the following error was generated: "Floating point Pow (0,-1.1376) Domain Error". This was traced to the use of the SPICE-compatible VALUE statement in an E element. The VALUE statement is used to model equations dependent on other nodes or currents. The statement in question used the form X^-Y. This was acceptable to IsSpice and PSpice, but not to Micro-Cap. The statement was rewritten in the equivalent form 1/(X^Y), which was accepted without error. [Pg.271]
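The equivalence of the two forms, and the reason Pow fails for a zero base with a negative exponent, can be checked outside the simulator; a minimal Python sketch (the function names are illustrative, not Micro-Cap syntax):

```python
# Pow(0, y) with negative y is a domain error because 0 raised to a
# negative power would require dividing by zero.
def power_direct(x, y):
    # The form X^-Y as written in the original VALUE statement
    return x ** -y

def power_rewritten(x, y):
    # The algebraically equivalent form 1/(X^Y) that Micro-Cap accepted
    return 1.0 / (x ** y)

# For any x > 0 the two forms agree:
assert abs(power_direct(2.0, 1.1376) - power_rewritten(2.0, 1.1376)) < 1e-12
```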

Block companding. This method is also known as "block floating point". A number of values, ordered either in the time domain (successive samples) or in the frequency domain (adjacent frequency lines), are normalized to a maximum absolute value. The normalization factor is called the scalefactor (or, in some cases, the exponent). All values within one block are then quantized with a quantization step size selected according to the number of bits allocated for this block. A bit allocation algorithm is needed to derive the number of bits allocated to each block from the perceptual model. In some cases, a simple bit allocation scheme without an explicit perceptual model (but still obeying masking rules) is used. [Pg.48]
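A block-floating-point quantizer of this kind can be sketched in a few lines. The symmetric uniform quantizer and all names below are illustrative; real codecs use codec-specific scalefactor tables and bit allocation:

```python
import numpy as np

def block_companding(block, bits):
    """Quantize one block of values with a shared scalefactor.

    All values are normalized to the maximum absolute value in the block
    (the scalefactor), then quantized uniformly with a step size set by
    the number of bits allocated to the block.
    """
    scalefactor = np.max(np.abs(block))        # one "exponent" per block
    if scalefactor == 0:
        return np.zeros_like(block), 0.0
    levels = 2 ** (bits - 1) - 1               # symmetric integer range
    codes = np.round(block / scalefactor * levels)
    return codes, scalefactor

def decode(codes, scalefactor, bits):
    levels = 2 ** (bits - 1) - 1
    return codes / levels * scalefactor

samples = np.array([0.10, -0.52, 0.33, 0.71])
codes, sf = block_companding(samples, bits=8)
restored = decode(codes, sf, bits=8)
# Quantization error is bounded by half a step: sf / levels / 2
```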

In order to deal with roundoff errors due to the use of SP floating-point numbers on the GPU, Yasuda introduced a scheme in which the XC potential is approximated with a model potential v_XC^model, chosen such that its matrix elements can be calculated analytically. This is done in DP on the CPU, while the GPU is used for calculating the correction, that is, for the numerical quadrature of the matrix elements of Δv_XC = v_XC − v_XC^model. Without the model potential, errors... [Pg.29]
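The division of labor (an exactly evaluated model term in DP plus a small correction evaluated in SP) can be mimicked with a toy summation; the quantities below are stand-ins, not the actual XC matrix elements:

```python
import numpy as np

rng = np.random.default_rng(0)
v_exact = rng.standard_normal(10000)   # stands in for the true integrand
v_model = np.round(v_exact, 1)         # cheap model, "analytically" known

# Double-precision part (CPU in Yasuda's scheme): the model contribution.
bulk_dp = np.sum(v_model, dtype=np.float64)

# Single-precision part (GPU): only the correction v_exact - v_model.
# Because the correction is small, SP roundoff acts on a small quantity.
corr_sp = np.sum((v_exact - v_model).astype(np.float32), dtype=np.float32)

total_mixed = bulk_dp + np.float64(corr_sp)
total_dp = np.sum(v_exact, dtype=np.float64)
# total_mixed tracks the full-DP result despite the SP arithmetic
```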

As stated in [14], it takes only 7 floating point operations to simulate 1 ms of a theta model, as compared to 1,200 for a conductance-based model. This reduced complexity makes large-scale simulations more tractable. It is therefore interesting to simulate a network of 900 PNs and 300 LNs, corresponding to the entire locust AL at scale 1, as in [6] only scale 1/10 was simulated. The simulations performed below will allow us to confirm that the results obtained at the reduced size were valid. In our model, we have considered a connection probability of 0.05 for the total of 300 LNs and 900 PNs. As mentioned above, we did not consider interconnections between PNs. The parameters for the input stimulus, the theta neurons, and the synapses are given in the appendix. The simulation of the model at scale 1 takes only 20 minutes on a Pentium 4 based PC running at 2.66 GHz. Note that the simulation is three times longer when interconnections between PNs are taken into account. [Pg.216]
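The low operation count per step comes from the theta model's single equation, dθ/dt = (1 − cos θ) + (1 + cos θ)I. A minimal forward-Euler integration of one neuron (the input current and step size are chosen for illustration, not taken from the cited network):

```python
import math

def theta_step(theta, I, dt):
    """One Euler step of the theta neuron: a handful of flops per step."""
    c = math.cos(theta)
    return theta + dt * ((1.0 - c) + (1.0 + c) * I)

# Constant suprathreshold drive I > 0 makes the neuron fire periodically
# with frequency sqrt(I)/pi; a spike is the phase crossing pi.
theta, I, dt = -math.pi, 0.1, 0.001
spikes = 0
for _ in range(100000):            # 100 time units
    new = theta_step(theta, I, dt)
    if theta < math.pi <= new:     # crossing of pi == spike
        spikes += 1
        new -= 2.0 * math.pi       # wrap the phase back onto the circle
    theta = new
```

With I = 0.1 the expected period is pi/sqrt(I), about 9.9 time units, so roughly ten spikes occur in the simulated window.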

Let us develop a performance model for the parallel matrix-vector multiplication. We first note that if the dimensions of A are n x n, the maximum number of processes that can be utilized in this parallel algorithm equals n. The total number of floating point operations required is n^2 (where we have counted a combined multiply and add as a single operation), and provided that the work is distributed evenly, which is a good approximation if n >> p, the computation time per process is... [Pg.83]
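Under those assumptions the per-process computation time is simply the total work divided evenly across processes; a small helper makes the scaling explicit (the per-operation time is an assumed figure, not a measured one):

```python
def matvec_time(n, p, t_op):
    """Estimated computation time per process for parallel y = A @ x.

    n    : matrix dimension (A is n x n); at most n processes are usable
    p    : number of processes, assumed to share the work evenly (n >> p)
    t_op : assumed time per combined multiply-add, counted as one
           operation as in the text
    """
    assert p <= n, "at most n processes can be utilized"
    return n * n / p * t_op   # n^2 operations total, n^2/p per process

# Example: a 4096 x 4096 matrix on 16 processes at 1 ns per operation
t = matvec_time(4096, 16, 1e-9)   # about 1 ms of compute per process
```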

Modern processor architectures exploit the parallelism inside the instruction stream by executing independent instructions concurrently on multiple functional units. This independence relation can be computed from the program: in the situation of pure floating-point arithmetic instructions considered in this paper, it can be inferred from the program text, and there is a trade-off between compiler and hardware measures to exploit it. As soon as data dependencies across load-store instructions are considered, dependencies can only be computed at run-time. In a later paper, we show how to extend the simple model presented here to also cope with such dynamic dependencies, as well as with speculative execution of instructions resulting from branch prediction. [Pg.30]

Thus, any real number belonging to the envelopment is a possible representative of the real number which the envelopment represents. The effect of envelopment is to model the propagation of error in floating-point numerical calculation (Vaccaro, 2001). [Pg.327]
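A toy interval type shows the idea: each floating-point operation returns an interval guaranteed to enclose the exact real result, so the interval width tracks the propagated error. This is a minimal sketch, not the formulation of Vaccaro (2001):

```python
import math

class Interval:
    """Toy 'envelopment' arithmetic: [lo, hi] always encloses the real value.

    Outward rounding is imitated crudely by widening every result by one
    ulp on each side; a real implementation would switch the FPU rounding
    mode instead.
    """
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, lo if hi is None else hi

    def _widen(self, lo, hi):
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(self, other):
        return self._widen(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        prods = [a * b for a in (self.lo, self.hi)
                       for b in (other.lo, other.hi)]
        return self._widen(min(prods), max(prods))

x = Interval(0.1)   # 0.1 is itself inexact in binary floating point
s = x + x + x
# The exact real value 0.3 lies inside the envelopment:
assert s.lo <= 0.3 <= s.hi
```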

Differences between the modelling environment and the target computer (for example, in its handling of floating point arithmetic) may mean that the executable object code behaves differently from the model. This limits the credit that may be taken for simulations carried out in the modelling environment. [Pg.307]

The single-processor vector machine will have only one of the vector processors depicted, and the system may even have its scalar floating-point capability shared with the vector processor. The early vector processors indeed possessed only one VPU, while present-day models can house up to 64 VPUs feeding on the same shared memory. It may be noted that the VPU in Fig. 1 does not show a cache. The majority of vector processors do not employ a cache anymore. In many cases the vector unit cannot take advantage of it, and the execution speed may even be unfavorably affected because of frequent cache overflow. [Pg.99]

Computer simulations can be considered to be part of the scientist's search for truth [20]. But while model systems are ostensibly truth-worthy, inasmuch as mathematics is a subject capable of true statements, in practice caution should be taken in suggesting that, once implemented in code, that is still the case. In coding up any mathematical model it is necessary to make approximations, whether this be the use of floating-point numbers, discarded terms in equations, or the approximation of functions. [Pg.76]
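Even the first of those approximations, floating-point numbers, suffices to break a mathematically true statement once it is coded:

```python
import math

# Mathematically, 0.1 + 0.2 == 0.3; in binary floating point it does not
# hold, because none of the three decimals is exactly representable.
print(0.1 + 0.2 == 0.3)               # False
print(0.1 + 0.2)                      # 0.30000000000000004

# The standard remedy is to compare within a tolerance instead of exactly:
print(math.isclose(0.1 + 0.2, 0.3))   # True
```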

