Big Chemical Encyclopedia


Logarithms, errors

The error metric is a measure of a model's prediction accuracy. The software provides a number of error metrics, such as squared error, worst-case error, logarithmic error, median error, interquartile absolute error, and signed difference for minimization. Options are also available to maximize the correlation coefficient, the R² goodness of fit, or an experimental hybrid that considers both absolute error and correlation. Data splitting is an important step which divides the data into a training set to generate solutions and a test set to check the accuracy of those solutions (Fig. 3.62). [Pg.187]
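As a sketch of the ideas above, the following computes a few of the named error metrics and performs a simple train/test split. The exact formulas used by the software are not given in the text, so these are illustrative definitions only.

```python
import numpy as np

def error_metrics(y_true, y_pred):
    """Illustrative definitions of several of the error metrics named above."""
    r = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return {
        "squared_error": float(np.mean(r ** 2)),
        "worst_case_error": float(np.max(np.abs(r))),
        "median_error": float(np.median(np.abs(r))),
        "interquartile_abs_error": float(
            np.percentile(np.abs(r), 75) - np.percentile(np.abs(r), 25)
        ),
    }

# Data splitting: a training set to generate solutions and a
# test set to check the accuracy of those solutions.
rng = np.random.default_rng(0)
idx = rng.permutation(10)
train_idx, test_idx = idx[:7], idx[7:]
```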

As can be seen in Figure 5-17, some search fields (e.g., POW [= Power]) do not need any input in the search mask; this means that all entries with any content of those fields are retrieved. However, other fields always demand an input. If the input is omitted (for example, for the decadic logarithm of the partition coefficient), a corresponding error message results. Since the PCBs are more soluble in the organic phase, the input of that field is restricted to positive values. [Pg.251]

For a sequence of reaction steps, two more concepts will be used in kinetics, besides the previous rules for single reactions. One is the steady-state approximation and the second is the rate-limiting step concept. These two are in a strict sense incompatible, yet assumption of both causes little error. Both were explained on Figure 6.1.1. Boudart (1968) credits Kenzi Tamaru with the graphical representation of reaction sequences. Here this will be used quantitatively on a logarithmic scale. [Pg.123]

The fractional error is logarithmically unbiased; that is, an M which is k times E produces the same magnitude of fractional error (but of opposite sign) as an M which is 1/k times E. [Pg.333]
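The statement above can be checked in symbols: on a log scale, an estimate M = k·E overshoots by exactly the amount that M = E/k undershoots. The values of E and k below are arbitrary.

```python
import math

# An estimate M = k*E overshoots on a log scale by +log(k);
# an estimate M = E/k undershoots by -log(k): equal magnitude, opposite sign.
E, k = 10.0, 2.5
over = math.log((k * E) / E)    # +log k
under = math.log((E / k) / E)   # -log k
```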

A problem with the overall approach for liquid mixtures is that suitable averages must be used when calculating B, although errors in B are partly offset by the logarithmic term in Eq. (1). It is also necessary to decide at what temperature the properties of air should be evaluated. In [66] it was suggested that Cp a and should be evaluated at the arithmetic mean of the... [Pg.210]

Figure 56. Logarithmic plots of the PRESS values as a function of the number of factors (rank) using the same samples for calibration and validation. As factors are added, the errors continue to decrease. When all of the factors are used, the errors equal exactly zero.
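The PRESS behavior described above, where the apparent error keeps falling as factors are added and reaches exactly zero at full rank when calibration and validation share the same samples, can be sketched with random data and a principal-component regression:

```python
import numpy as np

# Random data: 6 samples, 8 variables, so the data matrix has rank 6.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 8))
y = rng.standard_normal(6)

# Regress y on an increasing number of principal-component scores,
# evaluating the error on the SAME samples used for calibration.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
press = []
for rank in range(1, 7):
    T = U[:, :rank] * s[:rank]      # scores on the first `rank` factors
    b, *_ = np.linalg.lstsq(T, y, rcond=None)
    press.append(float(np.sum((y - T @ b) ** 2)))
# press decreases monotonically and is exactly zero at full rank.
```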
A common error is to confuse the GPC distribution with the weight distribution. The response of a refractive index detector is proportional to the mass of polymer. The GPC elution volume (V) typically scales according to the logarithm of the degree of polymerization (or the logarithm of the molecular... [Pg.241]

An application of Eq. (19) is shown in Fig. 4, which gives the solubility of solid naphthalene in compressed ethylene at three temperatures slightly above the critical temperature of ethylene. The curves were calculated from the equilibrium relation given in Eq. (12). Also shown are the experimental solubility data of Diepen and Scheffer (D4, D5) and calculated results based on the ideal-gas assumption (the ordinate scale is logarithmic), and it is evident that very large errors are incurred when corrections for gas-phase nonideality are neglected. [Pg.151]

The logarithmic response of ISEs can cause major accuracy problems. Very small uncertainties in the measured cell potential can cause large errors. (Recall that an... [Pg.145]
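The size of the problem can be illustrated numerically. For a monovalent ion at 25 °C the Nernstian slope is about 59.16 mV per decade of activity (2.303·RT/F at 298.15 K), so a potential error dE maps to a relative activity error of 10^(dE/59.16) − 1; the specific dE values below are chosen for illustration.

```python
import math

# Nernstian slope for a monovalent ion at 25 degrees C, in mV per decade.
slope_mV = 59.16

# A small uncertainty in the measured cell potential translates into a
# large relative error in the ion activity because of the log response.
for dE_mV in (0.5, 1.0, 2.0):
    rel_err = 10 ** (dE_mV / slope_mV) - 1
    print(f"{dE_mV} mV -> {100 * rel_err:.1f}% error in activity")
# Roughly 4% relative error per millivolt of potential uncertainty.
```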

A note on good practice Exponential functions (inverse logarithms, e^x) are very sensitive to the value of x, so carry out all the arithmetic in one step to avoid rounding errors. [Pg.487]
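A quick sketch of the hazard: rounding an intermediate logarithm before exponentiating shifts the final answer noticeably, because the exponential amplifies small changes in its argument. The numbers below are invented for illustration.

```python
# Premature rounding of an intermediate logarithm versus one-step arithmetic.
x_exact = -4.5678
x_rounded = round(x_exact, 1)            # intermediate rounded to -4.6
one_step = 10 ** x_exact                 # arithmetic carried out in one step
two_step = 10 ** x_rounded               # arithmetic with a rounded intermediate
rel_error = abs(two_step - one_step) / one_step
# Rounding x by only 0.03 in the exponent produces a ~7% error in the result.
```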

Error bars defined by the confidence limits CL(yi) will shrink or expand, most likely in an asymmetric manner. Since we here presuppose near absence of error from the abscissa values, this point applies only to y-transformations. A numerical example: 17 ± 1 (±5.9%, symmetric CL) upon logarithmic transformation becomes 1.23045 - 0.02633 ... 1.23045 + 0.02482. [Pg.129]
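The numerical example can be reproduced directly: a symmetric interval 17 ± 1 becomes asymmetric after a log10 transformation.

```python
import math

# Symmetric confidence limits 17 +/- 1 (about +/-5.9%) ...
y, dy = 17.0, 1.0
center = math.log10(y)                  # 1.23045
# ... become asymmetric after the logarithmic transformation:
below = center - math.log10(y - dy)     # 0.02633
above = math.log10(y + dy) - center     # 0.02482
```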

The data are also represented in Fig. 39.5a and have been replotted semi-logarithmically in Fig. 39.5b. Least squares linear regression of log Cp with respect to time t has been performed on the first nine data points. The last three points have been discarded as the corresponding concentration values are assumed to be close to the quantitation limit of the detection system and, hence, are endowed with a large relative error. We obtained the values of 1.701 and 0.005117 for the intercept log B and slope Sp, respectively. From these we derive the following pharmacokinetic quantities ... [Pg.460]
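The semilogarithmic fit described above can be sketched as follows. The concentration data of Fig. 39.5 are not reproduced in the text, so the block uses a synthetic monoexponential decay; the fit recovers the intercept log B and the slope by least-squares regression of log10(Cp) on time t.

```python
import numpy as np

# Synthetic monoexponential decay standing in for the plasma data
# (the true values from Fig. 39.5 are not available here).
t = np.arange(0.0, 90.0, 10.0)              # nine time points
Cp = 50.0 * 10 ** (-0.005 * t)              # ideal first-order decay

# Least-squares linear regression of log10(Cp) against t.
slope, intercept = np.polyfit(t, np.log10(Cp), 1)
B = 10 ** intercept                         # back-transformed intercept
```

In practice, points near the quantitation limit would be discarded first, as in the text, because their large relative error distorts the fit on the log scale.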

Almost any transformation of data changes the weight of the inherent features that we want to know about. A striking example is the simple logarithmic representation of scattering data and the related distortion of the error-bar spread (p. 124, Fig. 8.11). As we are interested in structure, we should fit data that present an undistorted view of structure. Our X-ray instrument has already transformed structure information into a scattering pattern, and we have to ask what we should do with the pattern before fitting - leave it as it is or transform it back ... [Pg.230]

The point is that, as our conclusions indicate, this is one case where the use of latent variables is not the best approach. The fact remains that with data such as this, one wavelength can model the constituent concentration exactly, with zero error, precisely because it can avoid the regions of nonlinearity, which the PCA/PLS methods cannot do. It is not possible to model the constituent better than that. Even if PLS could model it just as well with one or even two factors (a point we are not yet convinced of, since it has not yet been tried; it should work for a polynomial nonlinearity, but this nonlinearity is logarithmic), you would still end up with a more complicated model, with no benefit. [Pg.153]

Certainly, nonlinearities in real data can have several possible causes, both chemical (e.g., interactions that make the true concentrations of any given species different from what would be expected or calculated solely from what was introduced into a sample; interactions can change the underlying absorbance bands, to boot) and physical (such as the stray light that we simulated). Approximating these nonlinearities with a Taylor expansion is a risky procedure unless you know a priori what the error bound of the approximation is, and in any case it remains an approximation, not an exact solution. In the case of our simulated data, the nonlinearity was logarithmic, so even a second-order Taylor expansion would be of limited accuracy. [Pg.155]
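The limited accuracy of a second-order Taylor expansion for a logarithmic nonlinearity is easy to demonstrate: expand ln(1 + x) about x = 0 and compare with the exact value as x grows.

```python
import math

def taylor2(x):
    """Second-order Taylor expansion of ln(1 + x) about x = 0."""
    return x - x ** 2 / 2

# The approximation is good only very near the expansion point.
for x in (0.1, 0.5, 1.0):
    exact = math.log(1 + x)
    print(f"x={x}: exact={exact:.5f}, taylor2={taylor2(x):.5f}, "
          f"error={abs(exact - taylor2(x)):.5f}")
```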

From this, however, it should not be concluded that the statistical model of the atom is a very good one. As Fano (1963) has pointed out, I appears only inside a logarithm, and an error δI in the computation of I shows up as a relative error in the stopping power only as (1/B)(δI/I), where B is the stopping number. Besides, I is an average quantity and can be approximated reasonably well without knowing the details of the distribution. [Pg.19]
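A numerical illustration of Fano's point, using the nonrelativistic Bethe stopping number B = ln(2mv²/I) as a sketch; the values of 2mv² and I below are invented for the example.

```python
import math

# Assumed illustrative values: 2*m*v**2 = 10 keV, I = 80 eV.
arg = 10000.0 / 80.0
B = math.log(arg)           # stopping number, ~4.83

# Because I sits inside the logarithm, a 10% error in I ...
dI_over_I = 0.10
# ... propagates only as (1/B)*(dI/I), about a 2% error in stopping power.
dS_over_S = dI_over_I / B
```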

The second example considered is described by the monostable potential of the fourth order, ax⁴/4. In this nonlinear case the applicability of the exponential approximation depends significantly on the location of the initial distribution and on the noise intensity. Nevertheless, the exponential approximation of the time evolution of the mean gives qualitatively correct results and may be used as a first estimate over a wide range of noise intensities (see Fig. 14, a = 1). Moreover, if we increase the noise intensity further, we see that the error of our approximation decreases, and for kT = 50 the exponential approximation and the results of computer simulation coincide (see Fig. 15, plotted on a logarithmic scale, a = 1, x0 = 3). From this plot we can conclude that the nonlinear system is linearized by strong noise, an effect which is qualitatively obvious but which should be investigated further by analysis of the variance and higher cumulants. [Pg.421]

The usual approach is to compile data for the property in question for a series of structurally similar molecules and plot the logarithm of this property versus molecular descriptors, on a trial-and-error basis seeking the descriptor which best characterizes the variation in the property. It may be appropriate to use a training set to obtain a relationship and test this relationship on another set. Generally a set of at least ten data points is necessary before a reliable QSPR can be developed. [Pg.15]
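The trial-and-error search described above can be sketched as follows: regress the logarithm of the property against each candidate descriptor and keep the one with the best correlation. The descriptor names and property values below are synthetic, invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 12   # at least ten data points, as advised above

# Two hypothetical molecular descriptors for a series of similar molecules.
descriptors = {
    "molar_volume": rng.uniform(50, 200, n),
    "polarizability": rng.uniform(5, 30, n),
}

# Synthetic log-property driven by molar volume plus a little noise.
log_prop = 0.01 * descriptors["molar_volume"] + rng.normal(0, 0.05, n)

# Trial-and-error: pick the descriptor that best correlates with log(property).
best = max(
    descriptors,
    key=lambda d: abs(np.corrcoef(descriptors[d], log_prop)[0, 1]),
)
```

In practice one would then split the molecules into a training set to obtain the relationship and a test set to validate it, as the text recommends.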

Often the expected change in 210Pb concentration with depth (obtained from the logarithmic plot of unsupported activity as a function of the overlying mass of dry sediment accumulated) shows variations which are outside the analytical errors expected from the measurement of radioactive decay. Possible reasons for these discrepancies are ... [Pg.332]







© 2024 chempedia.info