Big Chemical Encyclopedia


Quadratic error

As the exact response equations appearing in the above expression are identically zero, one may conclude that the property has errors that are quadratic in the errors of the response, and thus higher precision. [Pg.345]

More precisely, since the inverse-problem procedure computes a quadratic error at every point of a local area around a flaw, we shall limit the sensor surface so that the quadratic error induced by the integration still lets us separate two close flaws and remains negligible in comparison with other noises or errors. An unavoidable noise is the electronic noise due to the coil resistance, which we can estimate from the geometrical and physical properties of the sensor. Here are the main conclusions ... [Pg.358]

We try to estimate the function H(u), denoted Ĥ, by minimizing the quadratic residual error... [Pg.747]

Figure B3.2.11. Total energy versus lattice constant of gallium arsenide from a VMC calculation including 256 valence electrons [118]; the curve is a quadratic fit. The error bars reflect the uncertainties of individual values. The experimental lattice constant is 10.68 au; the QMC result is 10.69 (±0.1) au (figure by Professor W. Schattke).
Abstract. A smooth empirical potential is constructed for use in off-lattice protein folding studies. Our potential is a function of the amino acid labels and of the distances between the Cα atoms of a protein. The potential is a sum of smooth surface potential terms that model solvent interactions and of pair potentials that are functions of a distance, with a smooth cutoff at 12 Angstrom. Techniques include the use of a fully automatic and reliable estimator for smooth densities, of cluster analysis to group together amino acid pairs with similar distance distributions, and of quadratic programming to find appropriate weights with which the various terms enter the total potential. For nine small test proteins, the new potential has local minima within 1.3–4.7 Å of the PDB geometry, with one exception that has an error of 5.5 Å. [Pg.212]

The performance index for MPC applications is usually a linear or quadratic function of the predicted errors and calculated future control moves. For example, the following quadratic performance index has been widely used ... [Pg.740]
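For orientation, a minimal sketch of one widely used form of such an index (the symbols below are assumed for illustration, not quoted from the source): with prediction horizon P, control horizon M, set point r, predicted output ŷ, and future control moves Δu,

J = Σ_{j=1..P} [r(k+j) − ŷ(k+j)]ᵀ Q [r(k+j) − ŷ(k+j)] + Σ_{j=1..M} Δu(k+j−1)ᵀ R Δu(k+j−1),

where Q and R are weighting matrices that trade off tracking error against control effort.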

Fig. 12. Interface width σ as a function of annealing time t during initial stages of interdiffusion of PS(D)/PS(H) [95]. Data points are obtained by a fit with error-function profiles of neutron reflectivity curves as shown in Fig. 11. Different symbols correspond to different samples. The interface width σ₀ prior to annealing is also indicated (▼) and is subtracted quadratically from the data (σ = [σ_exp² − σ₀²]^(1/2))...
Thus the quadratic sum of all the Zernike coefficients gives the rms value of the entire wave-front error by ... [Pg.43]
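A minimal sketch of this relation (assuming an orthonormal Zernike basis, with the piston term excluded; the names are our own):

import numpy as np

def rms_wavefront_error(zernike_coeffs):
    # For an orthonormal Zernike basis the wave-front variance is the
    # quadratic sum of the coefficients; piston (index 0) carries no
    # shape information and is excluded.
    a = np.asarray(zernike_coeffs, dtype=float)
    return np.sqrt(np.sum(a[1:] ** 2))

# Example, coefficients in waves: rms ≈ 0.37 waves.
print(rms_wavefront_error([0.5, 0.2, -0.3, 0.1]))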

The expected value of the quadratic error with respect to the true object brightness distribution must be as small as possible ... [Pg.402]

In order to derive the expression of the filter, we write the expected value e of the quadratic error ... [Pg.402]
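The expression itself is truncated in this excerpt. As a hedged pointer to where the derivation leads (the standard result for a linear filter with stationary, mutually uncorrelated object and noise; the notation is ours, not the source's), minimizing e = E{|õ(x) − o(x)|²} gives the Wiener filter

W(u) = Ĥ*(u) S_o(u) / ( |Ĥ(u)|² S_o(u) + S_n(u) ),

where S_o and S_n are the power spectral densities of the object and of the noise.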

We quote below the results of computations for problem (3) with y₀ = 1 and y₁ = s₂, where s₂ is the smallest root of the quadratic equation (4). Once supplemented with those initial conditions, the exact solution of problem (3) takes the form yᵢ = s₂ⁱ (A = 0). Because of rounding errors, the first summand emerges in formula (5). This term increases along with increasing i, thus causing abnormal termination in computational procedures. [Pg.89]

More specifically, the reduced variable Kj = kjΔ is defined on [0, π]. Generally, the error in an FD approximation (or rather its dispersion relation, Eq. (36)) increases with Kj. The Taylor series approach outlined above, which leads to the standard Lagrangian FD approximations, is essentially perfect for very small Kj but quickly deviates from the correct, quadratic dependence, Eq. (37). The generic behavior is that the error increases monotonically with Kj. Instead of requiring that the fit be perfect in the limit of very small Kj, we require that the error be no greater than ε from Kj = 0 up to some... [Pg.15]
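A minimal sketch of this behavior for the standard three-point second-derivative stencil (our own illustration of the claim, not the fitting procedure of the source): the stencil's dispersion relation 2(1 − cos K) deviates from the exact K², and the relative error grows monotonically with K on [0, π].

import numpy as np

K = np.linspace(1e-6, np.pi, 7)     # reduced wavenumber K = k*dx
exact = K**2                        # exact (quadratic) dispersion relation
fd = 2.0 * (1.0 - np.cos(K))        # three-point central-difference stencil
for k_val, e in zip(K, np.abs(fd - exact) / exact):
    print(f"K = {k_val:5.3f}   relative error = {e:.3e}")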

Figure 3 gives two examples of L² and L∞ closeness of two functions. The L² closeness leaves open the possibility that in a small region of the input space (with, therefore, a small contribution to the overall error) the two functions can be considerably different. This is not the case for L∞ closeness, which guarantees some minimal proximity of the two functions. Such proximity is important when, as in this case, one of the functions is used to predict the behavior of the other, and the accuracy of the prediction has to be established on a pointwise basis. In these cases, the L∞ error criterion (4) and its equivalent [Eq. (6)] are superior. In fact, L∞ closeness is a much stricter requirement than L² closeness. It should be noted that whereas the minimization of Eq. (3) is a quadratic problem and is guaranteed to have a unique solution, by minimizing the L∞ expected risk [Eq. (4)] one may obtain many solutions with the same minimum error. With respect to their predictive accuracy, however, all these solutions are equivalent, and, in addition, we have already retreated from the requirement to find the one and only real function. Therefore, the multiplicity of best solutions is not a problem. [Pg.179]
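A small numerical illustration of the distinction (our own construction): a narrow spike of unit height is nearly invisible to the L² criterion but maximally visible to the L∞ criterion.

import numpy as np

x = np.linspace(0.0, 1.0, 100_001)
f = np.zeros_like(x)
g = np.where(np.abs(x - 0.5) < 5e-4, 1.0, 0.0)   # spike of width ~1e-3

dx = x[1] - x[0]
l2 = np.sqrt(np.sum((f - g) ** 2) * dx)   # ≈ 0.03: tiny overall error
linf = np.max(np.abs(f - g))              # = 1.0: large pointwise error
print(l2, linf)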

It can be shown [4] that the innovation of a correct filter model applied to data with Gaussian noise follows a Gaussian distribution with a mean value equal to zero and a standard deviation equal to the experimental error. A model error means that the design vector h in the measurement equation is not adequate. If, for instance, in the calibration example the model were quadratic, hᵀ(j) should be [1 c(j) c²(j)] instead of [1 c(j)]. In the MCA example h(j) is wrong if the absorptivities of some absorbing species are not included. Any error in the design vector shows up as a non-zero mean of the innovation [4]. One also expects the sequence of the innovation to be random and uncorrelated. This can be checked by an investigation of the autocorrelation function (see Section 20.3) of the innovation. [Pg.599]
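A minimal sketch of such a check (function and variable names are our own): for a correct filter model the innovation sequence should have near-zero mean and an autocorrelation that vanishes for all lags greater than zero.

import numpy as np

def innovation_diagnostics(innov, max_lag=10):
    # Mean and autocorrelation of a filter innovation sequence.
    v = np.asarray(innov, dtype=float)
    mean = v.mean()
    d = v - mean
    denom = np.dot(d, d)
    acf = np.array([np.dot(d[: len(d) - k], d[k:]) / denom
                    for k in range(max_lag + 1)])
    return mean, acf

# White-noise innovations: mean ~ 0 and acf ~ [1, 0, 0, ...].
rng = np.random.default_rng(0)
mean, acf = innovation_diagnostics(rng.normal(0.0, 1.0, 500))
print(mean, acf.round(2))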

By automation one can remove the variation of the analysis time or shorten the analysis time. Although the variation of the analysis time causes half of the delay, a reduction of the analysis time is more important. This is also true if, by reducing the analysis time, the utilization factor (and thus q) were to remain the same because more samples are submitted. Since ρ = AT/IAT, any measure to shorten the analysis time will have a quadratic effect on the absolute delay (because w = AT²/(IAT − AT)). As a consequence the benefit of duplicate analyses (detection of gross errors) and frequent recalibration should be balanced against the negative effect on the delay. [Pg.618]

If the first derivative of the yield with respect to pressure is set equal to zero, an approximation of the maximum will be obtained, provided the second derivative is negative. In this case the second derivative is negative and the predicted maximum is at 62 psia. This calculated value could be high because of experimental error or because the quadratic equation is a poor estimator of the shape of the true surface. [Pg.395]
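A minimal sketch of this calculation for a fitted quadratic y = b0 + b1·p + b2·p² (the coefficients below are illustrative, not those of the original experiment): the stationary point lies at p* = −b1/(2·b2) and is a maximum only if the second derivative 2·b2 is negative.

def quadratic_maximum(b0, b1, b2):
    # Location and value of the maximum of y = b0 + b1*p + b2*p**2.
    if b2 >= 0.0:
        raise ValueError("second derivative 2*b2 is not negative: no maximum")
    p_star = -b1 / (2.0 * b2)
    return p_star, b0 + b1 * p_star + b2 * p_star**2

# Illustrative coefficients chosen so the maximum falls at 62 psia.
print(quadratic_maximum(b0=10.0, b1=0.62, b2=-0.005))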

If the approximation had caused an error of 10% or more, you would not be able to use it; you would have to solve by a more rigorous method, such as the quadratic formula. [Pg.291]

A finite element method based on these functions would have an error proportional to Δx². The finite element representations for the first derivative and second derivative are the same as in the finite difference method, but this is not true for other functions or derivatives. With quadratic finite elements, take the region from xᵢ₋₁ to xᵢ₊₁ as one element. Then the interpolation would be... [Pg.53]
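The interpolation formula itself is truncated in this excerpt; a minimal sketch of quadratic Lagrange interpolation on one such element (notation assumed) is:

def quadratic_shape_functions(x, xl, xm, xr):
    # Quadratic Lagrange shape functions on an element with nodes xl < xm < xr.
    n_left  = (x - xm) * (x - xr) / ((xl - xm) * (xl - xr))
    n_mid   = (x - xl) * (x - xr) / ((xm - xl) * (xm - xr))
    n_right = (x - xl) * (x - xm) / ((xr - xl) * (xr - xm))
    return n_left, n_mid, n_right

# A quadratic, u(x) = x**2, is reproduced exactly on the element [0, 1].
nodes = (0.0, 0.5, 1.0)
u_nodes = [xi**2 for xi in nodes]
N = quadratic_shape_functions(0.3, *nodes)
print(sum(u * n for u, n in zip(u_nodes, N)))   # 0.09 = 0.3**2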

The basis for this calculation of the amount of nonlinearity is illustrated in Figure 67-1. In Figure 67-1a we see a set of data showing some nonlinearity between the test results and the actual values. If a straight line and a quadratic polynomial are both fit to the data, then the difference between the predicted values from the two curves gives a measure of the amount of nonlinearity. Figure 67-1a shows data subject to both random error and nonlinearity, and the different ways linear and quadratic polynomials fit the data. [Pg.451]
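A minimal sketch of this comparison on synthetic data (our own construction):

import numpy as np

rng = np.random.default_rng(1)
actual = np.linspace(0.0, 10.0, 50)
# Test results with a mild nonlinearity plus random error.
test = actual + 0.05 * actual**2 + rng.normal(0.0, 0.3, actual.size)

lin_fit = np.polyval(np.polyfit(actual, test, 1), actual)
quad_fit = np.polyval(np.polyfit(actual, test, 2), actual)

# The spread between the two fitted curves measures the nonlinearity.
print(np.max(np.abs(quad_fit - lin_fit)))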

For Solar-System ²³²Th and ²³⁸U, ⟨τ⟩ = 3.2 Gyr. The error from uncertainties in ln K is of order 10 per cent, and that from neglecting the quadratic term in Eq. (10.33) is also of order 10 per cent, but is systematic in the sense that the 3.2 Gyr is an overestimate by about that amount. [Pg.430]

In MPC a dynamic model is used to predict the future output over the prediction horizon based on a set of control changes. The desired output is generated as a set-point that may vary as a function of time; the prediction error is the difference between the set-point trajectory and the model prediction. A model predictive controller is based on minimizing a quadratic objective function over a specific time horizon, formed from the sum of the squares of the prediction errors plus a penalty... [Pg.568]

Tjoa, I. B., and L. T. Biegler. "Reduced Successive Quadratic Programming Strategy for Errors-in-Variables Estimation." Comput. Chem. Eng. 16(6): 523–533 (1992). [Pg.581]

The adjustment of measurements to compensate for random errors involves the resolution of a constrained minimization problem, usually one of constrained least squares. Balance equations are included in the constraints; these may be linear but are generally nonlinear. The objective function is usually quadratic with respect to the adjustment of the measurements, and it has the covariance matrix of the measurement errors as weights. This matrix is thus essential for obtaining reliable process knowledge. Some efforts have been made to estimate it from measurements (Almasy and Mah, 1984; Darouach et al., 1989; Keller et al., 1992; Chen et al., 1997). The difficulty in the estimation of this matrix is associated with the analysis of the serial and cross-correlation of the data. [Pg.25]
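A minimal sketch of such an adjustment for linear balance constraints Ax = 0 (symbols and values are ours): minimizing the quadratic objective (x − y)ᵀ Σ⁻¹ (x − y) subject to Ax = 0 has the closed-form solution used below, with the measurement-error covariance Σ as the weight matrix.

import numpy as np

def reconcile(y, A, sigma):
    # Adjust measurements y so that A @ x = 0, minimizing the
    # covariance-weighted quadratic error (x - y)' inv(sigma) (x - y).
    gain = sigma @ A.T @ np.linalg.inv(A @ sigma @ A.T)
    return y - gain @ (A @ y)

# One mass balance: stream1 + stream2 - stream3 = 0.
A = np.array([[1.0, 1.0, -1.0]])
y = np.array([10.2, 5.1, 14.7])        # raw measurements
sigma = np.diag([0.04, 0.01, 0.09])    # measurement-error covariance
x = reconcile(y, A, sigma)
print(x, A @ x)                        # adjusted values; residual ~ 0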

