Big Chemical Encyclopedia


Error between predictions

This situation was later remedied by Ceulemans et al. [18], who used an extended Wu potential employing six force constants. For consistency, we shall refer to this model as the 6k model. It builds upon the 4k model by adding two interaction constants, k5 and k6, as shown in Fig. 7. Such a model can correctly predict the frequencies of the F1g modes. Using the model, the authors obtained values for the six force constants by minimising the error between predicted and observed mode frequencies. As can be seen in Fig. 7, a much better fit to the data is obtained in this way. [Pg.346]

Figure 9 shows the result of such a fit made by minimising the squares of the errors between predicted and observed frequencies. A problem here is that there are 13 constants to determine and only 14 experimental frequencies known accurately. It is clear that a good fit to the visible modes can be obtained. However, we can never be sure that the fit obtained represents a global minimum of the error function. The data in Fig. 9 must therefore be viewed as the best fit currently available, and the values of the force constants themselves must be viewed with some scepticism. However, as more data become available, i.e., as accurate frequencies of some of the silent modes are obtained, the model may be refined to improve these initial estimates. [Pg.349]

The overall accuracy of the quaternary gradient experiments was slightly worse than for the simple binary gradients, but in no case was the error between predicted and observed values more than 6%. It was felt, given the complexity of the system, that these errors were sufficiently small for prediction purposes. [Pg.203]

After an initial model has been developed, one should check the autocorrelation of the residuals (the error between the predicted and measured variable). For a good model, the autocorrelation of the residuals should be small. In addition, there should be no cross correlation between the process input and the residuals. If either the autocorrelation or the cross correlation of the residuals is not small, the model structure should be expanded, usually resulting in a more complex model. [Pg.303]
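This residual check can be sketched numerically. The helper functions and the ±2/√N significance band below are a common textbook convention, not taken from the source; the alternating demo series is illustrative only.

```python
import numpy as np

def autocorr(x, max_lag=10):
    """Sample autocorrelation of a residual series at lags 1..max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / var for k in range(1, max_lag + 1)])

def cross_corr(u, e, max_lag=10):
    """Normalized cross-correlation between input u and residuals e, lags 0..max_lag."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    e = np.asarray(e, dtype=float) - np.mean(e)
    denom = np.sqrt(np.dot(u, u) * np.dot(e, e))
    return np.array([np.dot(u[:len(u) - k], e[k:]) / denom
                     for k in range(0, max_lag + 1)])

# A strongly patterned residual series fails the check: its lag-1
# autocorrelation is far outside a rough ±2/sqrt(N) band for white noise.
e_demo = [1.0, -1.0] * 50
print(round(autocorr(e_demo, 1)[0], 2))  # → -0.99
```

For a well-specified model, both correlation sequences should fall inside the significance band at all lags.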

Figure 8 shows the mode shapes of the updated model, corresponding to the experimental ones (Fig. 5), and the correlation with the measured modal behavior. It should be noted that the updated model represents an excellent approximation of the real structure: the maximum relative error between predicted and measured modal frequencies exceeds 1% only for mode B3. [Pg.44]

FIGURE 3.10 Optimization of the parameter C in the equation for permeability estimation from NMR measurements against results of direct permeability measurements. The left three plots show curves calculated with different values of C together with measured permeability data (dots). The central graph presents the sum of absolute errors between predicted and measured data as a function of the value of C; the curve has a minimum error at C = 14.5, which gives the optimal input for the final solution (right plot). Georgi et al. (1997) and Kasap et al. (1999). [Pg.104]
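The one-parameter optimization in the figure can be sketched as a grid search. The SDR-type model form (k = C·φ⁴·T2gm²) and all numerical values below are illustrative assumptions, not the data behind Figure 3.10; only the criterion (sum of absolute errors, minimized over C) follows the caption.

```python
import numpy as np

def predict_perm(C, phi, t2gm):
    """Hypothetical SDR-type permeability estimate: k = C * phi^4 * T2gm^2."""
    return C * phi**4 * t2gm**2

# Illustrative porosity / T2 data; "measurements" synthesized with C = 14.5
phi = np.array([0.20, 0.25, 0.18, 0.30])
t2gm = np.array([50.0, 80.0, 40.0, 100.0])
k_meas = predict_perm(14.5, phi, t2gm)

# Sweep C and accumulate the sum of absolute prediction errors
C_grid = np.linspace(5.0, 25.0, 201)
sae = [np.sum(np.abs(predict_perm(C, phi, t2gm) - k_meas)) for C in C_grid]
C_opt = C_grid[int(np.argmin(sae))]
print(C_opt)  # → 14.5
```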

The simulation results at steady state (1700 s) were compared with experimental measurements for HDS. The developed model was found to simulate the performance of the bench-scale TBR with high accuracy, giving errors in sulfur conversion prediction ranging from -1.13% to +0.56%. Other reactions were also simulated reasonably well, with similar errors between predicted and experimental concentrations. [Pg.256]

In the present work, to take into consideration the inhibiting effect of H2S on the HDS reaction, Equation 9.16 was used. The value of the H2S inhibition constant was obtained by minimizing the error between predicted and experimental data of sulfur removal. The reaction order for hydrogen, nH2, was fixed at 0.5 according to several reports in the literature (Ross, 1965; Mederos, 2010). The value nH2 = 0.5 is attributed to the dissociation of H2 on the catalyst sites. Also, the denominator of Equation 9.16 is raised to the second power because reaction occurs on two consecutive sites on the catalyst surface. [Pg.325]

In a standard back-propagation scheme, updating the weights is done iteratively. The weights for each connection are initially randomized when the neural network undergoes training. Then the error between the target output and the network's predicted output is back-propa-... [Pg.7]
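The excerpt is cut off, but the iterative idea it describes can be sketched in minimal form: the output error is propagated back as a gradient and each weight is nudged against it. A single linear neuron with squared-error loss is used here for brevity (an assumption; the source's network is not specified), with randomized initial weights as the text states.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))        # inputs
w_true = np.array([0.5, -1.0, 2.0])
y = X @ w_true                           # target outputs

w = rng.standard_normal(3)               # weights initially randomized
lr = 0.05                                # learning rate
for _ in range(500):
    y_hat = X @ w                        # network predicted output
    err = y_hat - y                      # error vs. target output
    grad = X.T @ err / len(y)            # back-propagated gradient
    w -= lr * grad                       # iterative weight update

print(np.allclose(w, w_true, atol=1e-3))  # → True
```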

The best protection we have against placing an inadequate calibration into service is to challenge the calibration as aggressively as we can with as many validation samples as possible. We do this to uncover any weaknesses the calibration might have and to help us understand the calibration's limitations. We pretend that the validation samples are unknowns. We use the calibration that we developed with the training set to predict the concentrations of the validation samples. We then compare these predicted concentrations to the known or expected concentrations for these samples. The error between the predicted and expected concentrations is indicative of the error we could expect when we use the calibration to analyze actual unknown samples. [Pg.21]

We can also examine these results numerically. One of the best ways to do this is by examining the Predicted Residual Error Sum-of-Squares or PRESS. To calculate PRESS we compute the errors between the expected and predicted values for all of the samples, square them, and sum them together. [Pg.60]
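The PRESS calculation described above is direct to express in code; the numeric values here are illustrative only.

```python
import numpy as np

def press(y_expected, y_predicted):
    """Predicted Residual Error Sum-of-Squares: square the errors between
    expected and predicted values for all samples, then sum them."""
    r = np.asarray(y_expected, dtype=float) - np.asarray(y_predicted, dtype=float)
    return float(np.sum(r**2))

# Errors are 0.1, 0.1, 0.2, so PRESS = 0.01 + 0.01 + 0.04
print(round(press([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]), 6))  # → 0.06
```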

Calculate the sum of squared errors between the expected and predicted concentrations for the sample that was left out. [Pg.107]

The Standard Error of Prediction (SEP) is supposed to refer uniquely to those situations when a calibration is generated with one data set and evaluated for its predictive performance with an independent data set. Unfortunately, there are times when the term SEP is wrongly applied to the errors in predicting y variables of the same data set which was used to generate the calibration. Thus, when we encounter the term SEP, it is important to examine the context in order to verify that the term is being used correctly. SEP is simply the square root of the Variance of Prediction, s². The RMSEP (see below) is sometimes wrongly called the SEP. Fortunately, the difference between the two is usually negligible. [Pg.169]

The Root Mean Square Error of Prediction (RMSEP) is simply the square root of the MSEP. The RMSEP is sometimes wrongly called the SEP. Fortunately, the difference between the two is usually negligible. [Pg.169]

Depending on the particular design of inlet and outlet manifolds, the difference between the flow rates into some parallel micro-channels may be up to 20%. Idealizing the flow rate as uniform can result in significant error in prediction of the temperature distribution of a heated electronic device. [Pg.188]

The computation results yield the acetaldehyde concentration as a function of time. The values of the kinetic parameters k1, k2, k3 were adjusted to minimize the sum of squared errors between the predicted and measured concentrations using the Hooke-Jeeves method [3]. [Pg.223]
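The fitting step can be sketched with a simplified Hooke-Jeeves pattern search. The first-order concentration model, the synthetic "measurements", and the search settings below are all illustrative assumptions; only the criterion (minimize the sum of squared errors over k1, k2, k3) follows the text.

```python
import numpy as np

def model(k, t):
    """Hypothetical concentration model: c(t) = k1*exp(-k2*t) + k3."""
    k1, k2, k3 = k
    return k1 * np.exp(-k2 * t) + k3

t = np.linspace(0.0, 5.0, 20)
c_meas = model(np.array([2.0, 0.8, 0.3]), t)   # synthetic "measured" data

def sse(k):
    """Sum of squared errors between predicted and measured concentrations."""
    return float(np.sum((model(k, t) - c_meas) ** 2))

def hooke_jeeves(f, x0, step=0.25, tol=1e-6, max_iter=5000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        # Exploratory moves: probe +/- step along each coordinate
        x_try, f_try = x.copy(), fx
        for i in range(len(x)):
            for d in (step, -step):
                cand = x_try.copy()
                cand[i] += d
                fc = f(cand)
                if fc < f_try:
                    x_try, f_try = cand, fc
                    break
        if f_try < fx:
            # Pattern move: repeat the successful combined direction
            pat = x_try + (x_try - x)
            fp = f(pat)
            x, fx = (pat, fp) if fp < f_try else (x_try, f_try)
        else:
            step *= 0.5                     # no improvement: refine the mesh
    return x, fx

k_fit, err = hooke_jeeves(sse, [1.0, 1.0, 1.0])
print(k_fit, err)
```

The pattern move is what distinguishes Hooke-Jeeves from plain coordinate search: it accelerates progress along narrow valleys, which are common when rate constants are correlated.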

The least squares criterion states that the norm of the error between observed and predicted (dependent) measurements, ||y − ŷ||, must be minimal. Note that the latter condition involves the minimization of a sum of squares, from which the unknown elements of the vector b can be determined, as is explained in Chapter 10. [Pg.53]

Y − Xβa. If the number of input variables is greater than the number of observations, there is an infinite number of exact solutions for the least-squares or linear regression coefficients βa. If the numbers of variables and observations are equal, there is a unique solution for βa, provided that X has full rank. If the number of variables is less than the number of measurements, which is usually the case with process data, there is no exact solution for βa (Geladi and Kowalski, 1986), but βa can be estimated by minimizing the least-squares error between the actual and predicted outputs. The solution to the least-squares approximation problem is given by the pseudoinverse as... [Pg.35]
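The pseudoinverse solution for the overdetermined case (more measurements than variables) can be sketched as follows; the data values are illustrative.

```python
import numpy as np

# 4 measurements, 2 variables; X has full column rank
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])   # here exactly y = 1 + 2x

# Least-squares estimate via the Moore-Penrose pseudoinverse
beta = np.linalg.pinv(X) @ y
print(beta)                          # close to [1.0, 2.0]
```

For well-conditioned problems this coincides with `np.linalg.lstsq(X, y, rcond=None)`, which is the numerically preferred route in practice.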

The Q-statistic or squared prediction error (SPE) is the sum of squares of the errors between the data and the estimates, a direct calculation of variability ... [Pg.55]
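As a minimal sketch (the arrays are illustrative; in practice the estimates would come from a model such as a PCA reconstruction), the SPE for each sample is the row-wise sum of squared residuals:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])         # observed data, one sample per row
X_hat = np.array([[0.9, 2.1],
                  [3.2, 3.8]])     # model estimates of the same samples

# SPE per sample: sum of squared errors between data and estimate
spe = np.sum((X - X_hat) ** 2, axis=1)
print(spe)                         # close to [0.02, 0.08]
```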

Output errors can be especially insidious, since the natural tendency of most model users is to accept the observed data values as the "truth" upon which the adequacy and ability of the model will be judged. Model users should develop a healthy, informed scepticism of the observed data, especially when major, unexplained differences between observed and simulated values exist. The FAT workshop described earlier concluded that it is clearly inappropriate to allocate all differences between predicted and observed values as model errors; measurement errors in field data collection programs can be substantial and must be considered. [Pg.161]

Selection of the form of an empirical model requires judgment as well as some skill in recognizing how response patterns match possible algebraic functions. Optimization methods can help in the selection of the model structure as well as in the estimation of the unknown coefficients. If you can specify a quantitative criterion that defines what best represents the data, then the model can be improved by adjusting its form to improve the value of the criterion. The best model presumably exhibits the least error between actual data and the predicted response in some sense. [Pg.48]

To compensate for the errors involved in experimental data, the number of data sets should be greater than the number of coefficients p in the model. Least squares is just the application of optimization to obtain the best solution of the equations, meaning that the sum of the squares of the errors between the predicted and the experimental values of the dependent variable y for each data point x is minimized. Consider a general algebraic model that is linear in the coefficients. [Pg.55]
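A model linear in its coefficients can be fitted exactly as described, with more data points than coefficients; the quadratic form and data below are illustrative assumptions.

```python
import numpy as np

# Model linear in the coefficients: y = a0 + a1*x + a2*x^2 (p = 3 coefficients)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])   # 6 data points > 3 coefficients
y = 1.0 + 2.0 * x - 0.5 * x**2                 # noise-free synthetic data

A = np.vander(x, 3, increasing=True)           # columns: 1, x, x^2
# Minimize the sum of squared errors between predicted and experimental y
coef, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)
print(coef)                                    # close to [1.0, 2.0, -0.5]
```

With noisy data the same call returns the coefficients that minimize the sum of squared residuals, which is exactly the least-squares criterion stated in the text.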

To minimize J, you balance the error between the setpoint and the predicted response against the size of the control moves. Equation 16.2 contains design parameters that can be used to tune the controller; that is, you vary the parameters until the response tracks the setpoint trajectory with the desired shape (Seborg et al., 1989). The move suppression factor λ penalizes large control moves, but the weighting factors wi allow the predicted errors to be weighted differently at each time step, if desired. Typically you select a value of m (the number of control moves) that is smaller than the prediction horizon p, so the control variables are held constant over the remainder of the prediction horizon. [Pg.570]
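The structure of such an objective can be sketched as follows. This is a generic quadratic MPC cost (weighted predicted errors plus a move-suppression term), not the specific Equation 16.2; the variable names and numbers are illustrative.

```python
import numpy as np

def mpc_objective(e_pred, du, w, lam):
    """J = sum_i w_i * e_i^2  +  lam * sum_j du_j^2
    e_pred: predicted setpoint errors over the prediction horizon
    du:     control moves (increments) over the control horizon
    w:      per-step error weights; lam: move suppression factor"""
    e_pred = np.asarray(e_pred, dtype=float)
    du = np.asarray(du, dtype=float)
    return float(np.sum(w * e_pred**2) + lam * np.sum(du**2))

e = [1.0, 0.5, 0.25]   # predicted errors, prediction horizon p = 3
du = [0.4, 0.1]        # control moves, m = 2 < p
J = mpc_objective(e, du, w=np.ones(3), lam=2.0)
print(round(J, 4))     # → 1.6525
```

Increasing `lam` shrinks the control moves at the cost of slower setpoint tracking, which is the tuning trade-off the text describes.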

Ej random error between the jth data point and the model prediction [Pg.635]

