
Mean multiplicative error

Moog and Jirka (1998) investigated how well a number of equations corresponded with the available data, using the mean multiplicative error, MME ... [Pg.240]

Moog, D. B., and G. H. Jirka. 1995. Analysis of reaeration equations using mean multiplicative error. In Air-Water Gas Transfer, B. Jähne and E. C. Monahan (Eds.). ASCE, New York, pp. 101-111. [Pg.469]

Moog and Jirka also found that Equation 9.29, even though it was the best predictor, still had a mean multiplicative error of 1.8. This means that the predictions of Equation 9.29 can be expected to differ from field measurements by a factor of 1.8, whether multiplied or divided; fifty percent of the predictions will differ by more than this factor, and 50% by less. In addition, they found that below a stream slope of... [Pg.224]
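The MME itself is simple to compute. Below is a minimal sketch, assuming the common geometric-mean definition MME = exp(mean|ln(predicted/observed)|), which matches the factor-of-1.8 interpretation above; the function name and sample data are illustrative, not taken from Moog and Jirka.

```python
import numpy as np

def mean_multiplicative_error(predicted, observed):
    """MME as the geometric mean of the multiplicative deviation.

    Assumes MME = exp( mean( |ln(predicted/observed)| ) ), so an MME of 1.8
    means predictions are off, on average, by a multiplicative or divisive
    factor of 1.8.
    """
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.exp(np.mean(np.abs(np.log(predicted / observed)))))

# Hypothetical reaeration coefficients (1/day): predictions vs. field data
print(mean_multiplicative_error([2.1, 0.8, 5.0, 1.3], [1.5, 1.1, 3.2, 2.0]))
```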

Field measurements are not as precise as laboratory measurements. While this is a true statement, some dedicated field experimentalists have greatly improved field techniques over recent decades (Tsivoglou and Wallace, 1972; Kilpatrick et al., 1979; Clarke et al., 1994; Hibbs et al., 1998). While the implementation of field studies is still a challenge, measurement accuracy alone cannot account for a mean multiplicative error of 1.8. [Pg.224]

Munz, C. and Roberts, P. V. 1984. Analysis of reaeration equations using mean multiplicative error. In Gas Transfer at Water Surfaces, W. Brutsaert and G. H. Jirka (Eds.). D. Reidel,

Cardei and Funt (1999) suggested combining the output from multiple color constancy algorithms that estimate the chromaticity of the illuminant. Their approach is called committee-based color constancy. By combining multiple estimates into one, the root mean squared error between the estimated chromaticity and the actual chromaticity is reduced. Cardei and Funt experimented with committees formed using the gray world assumption,... [Pg.197]
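As a rough illustration of the committee idea (not Cardei and Funt's actual implementation), the sketch below averages the chromaticity estimates of three hypothetical algorithms and compares the RMSE of the combined estimate with that of each individual algorithm; all numbers are made up.

```python
import numpy as np

# Hypothetical (r, g) chromaticity estimates of the illuminant from three
# color constancy algorithms, for a batch of five scenes.
estimates = np.array([
    [[0.33, 0.34], [0.40, 0.31], [0.36, 0.33], [0.30, 0.35], [0.38, 0.32]],  # gray world
    [[0.35, 0.32], [0.37, 0.33], [0.34, 0.34], [0.32, 0.33], [0.36, 0.33]],  # max-RGB
    [[0.31, 0.35], [0.39, 0.30], [0.37, 0.32], [0.29, 0.36], [0.37, 0.31]],  # gamut-based
])
actual = np.array([[0.34, 0.33], [0.38, 0.32], [0.36, 0.33],
                   [0.31, 0.34], [0.37, 0.32]])

def rmse(est, act):
    return float(np.sqrt(np.mean((est - act) ** 2)))

# Simple committee: average the individual estimates per scene.
committee = estimates.mean(axis=0)

for i, est in enumerate(estimates):
    print(f"algorithm {i}: RMSE = {rmse(est, actual):.4f}")
print(f"committee  : RMSE = {rmse(committee, actual):.4f}")
```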

Figures 11 and 12 illustrate the performance of the pR2 compared with several currently popular criteria on a specific data set resulting from one of the drug hunting projects at Eli Lilly. This data set has IC50 values for 1289 molecules. There were 2317 descriptors (or covariates), and a multiple linear regression model was used with forward variable selection; the linear model was trained on half the data (selected at random) and evaluated on the other (hold-out) half. The root mean squared error of prediction (RMSE) for the hold-out test set is minimized when the model has 21 parameters. Figure 11 shows the model size chosen by several criteria applied to the training set in a forward selection: for example, the pR2 chose 22 descriptors, the Bayesian Information Criterion (BIC) chose 49, Leave-One-Out cross-validation chose 308, the adjusted R2 chose 435, and the Akaike Information Criterion chose 512 descriptors. Although the pR2 criterion selected considerably fewer descriptors than the other methods, it had the best prediction performance. Also, only pR2 and BIC predicted better on the test data set than the null model.
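The mechanics behind such comparisons are easy to reproduce on synthetic data. The sketch below is a generic forward-selection loop that tracks both BIC on the training half and RMSE on a hold-out half; it is a schematic stand-in for the Eli Lilly analysis, not the pR2 implementation, and all data sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the drug-hunting data: n molecules, p descriptors,
# of which only a few truly drive the response.
n, p, n_true = 400, 50, 5
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:n_true] = rng.uniform(1.0, 2.0, n_true)
y = X @ beta + rng.standard_normal(n)

# Half the data for training, half held out, as in the text.
idx = rng.permutation(n)
tr, te = idx[: n // 2], idx[n // 2:]

def fit(cols, rows):
    """Least-squares fit on the chosen descriptor columns; returns coefs, RSS."""
    A = np.column_stack([np.ones(len(rows)), X[np.ix_(rows, cols)]])
    coef, *_ = np.linalg.lstsq(A, y[rows], rcond=None)
    return coef, float(np.sum((y[rows] - A @ coef) ** 2))

selected, remaining, path = [], list(range(p)), []
for _ in range(20):                      # greedy forward selection
    best = min(remaining, key=lambda j: fit(selected + [j], tr)[1])
    selected.append(best)
    remaining.remove(best)
    coef, rss = fit(selected, tr)
    k = len(selected) + 1                # parameters incl. intercept
    bic = len(tr) * np.log(rss / len(tr)) + k * np.log(len(tr))
    A_te = np.column_stack([np.ones(len(te)), X[np.ix_(te, selected)]])
    rmse = float(np.sqrt(np.mean((y[te] - A_te @ coef) ** 2)))
    path.append((len(selected), bic, rmse))

print("BIC picks", min(path, key=lambda t: t[1])[0], "descriptors;",
      "hold-out RMSE picks", min(path, key=lambda t: t[2])[0])
```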
Estimating the Mean Prediction Error with Two Random Effects. Another approach to estimating the mean prediction error that accounts for multiple observations in the same individual has recently been proposed. Here the CI is constructed under the statistical model... [Pg.239]

Even though many different covariates may be collected in an experiment, it may not be desirable to enter all of them into a multiple regression model. First, not all covariates may be statistically significant; they have no predictive power. Second, a model with too many covariates has variances (e.g., standard errors, residual errors) that are larger than those of simpler models. On the other hand, too few covariates lead to models with biased parameter estimates, biased mean square error, and poor predictive capability. As previously stated, model selection should follow Occam's razor, which basically states that the simpler model is always chosen over more complex models. ... [Pg.64]

B-VWN and B-LYP, on the other hand, perform very well and give binding energies with very small mean error. For B-LYP, the mean error is 1.0 kcal/mol and the mean absolute error is 5.6 kcal/mol. The molecules that are underbound at this theoretical level are the simple hydrides with lone-pair electrons (H2O and NH3, for example). On the other hand, molecules with multiple bonds, as well as H2O2 and F2, tend to be overbound. The low mean error is partially due to the fairly small basis set used; B-LYP theory leads to some overbinding with a large basis. [Pg.207]

We have discussed residual (e_i) analysis in other chapters, so we will not spend a lot of time revisiting it. Two forms of residual analysis are particularly valuable for use in multiple regression: semi-Studentized and Studentized residuals. A semi-Studentized residual, e_i*, is the ith residual value divided by the square root of the mean square error. [Pg.335]
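In symbols, using the standard definitions (MSE is the residual mean square and h_ii is the ith leverage, i.e., the ith diagonal element of the hat matrix):

```latex
e_i^{*} = \frac{e_i}{\sqrt{\mathrm{MSE}}}
\qquad\qquad
r_i = \frac{e_i}{\sqrt{\mathrm{MSE}\,(1 - h_{ii})}}
```

The semi-Studentized form e_i* puts all residuals on a common scale, while the Studentized form r_i also corrects for the fact that residuals at high-leverage points have smaller variance.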

Asparagine and glutamine can be measured simultaneously by NIR spectroscopy with standard errors of prediction of only 0.18 mM and 0.10 mM, as well as mean percent errors of 2.50% and 2.00%, respectively. This level of analysis is possible in spite of the chemical and spectroscopic similarities between these two amino acids. The successful measurement of these compounds suggests that NIR spectroscopy is capable of excellent selectivity, which will be critical for further expanding this methodology to measuring multiple components in the complex matrices associated with bioreactors. [Pg.131]

Root mean squared error (RMSE) and squared correlation coefficient (R2) are shown for the COSMOquick approach using multiple reference solvents (multiple reference) and using a free energy of fusion estimated via quantitative structure-property relationship (QSPR) models for the melting point and enthalpy of fusion. [Pg.224]

Since a neural network can arrive at different solutions for the same data if different initial network weights are provided, the network should be trained several times. The goal is to find a neural network model for which multiple trainings approach the same final mean squared error (MSE). [Pg.437]
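A minimal sketch of this repeated-training check, using scikit-learn's MLPRegressor purely as an illustration (the architecture, data, and seeds are arbitrary choices, not from the source):

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, 200)

# Train the same architecture several times from different initial weights
# and compare the final training MSEs.
final_mses = []
for seed in range(5):
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                       random_state=seed, tol=1e-6)
    net.fit(X, y)
    final_mses.append(mean_squared_error(y, net.predict(X)))

print([round(m, 4) for m in final_mses])
# Closely agreeing MSEs across seeds suggest a stable model; widely scattered
# MSEs suggest the optimizer is landing in different local minima.
```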

Computational issues that are pertinent in MD simulations are the time complexity of the force calculations and the accuracy of the particle trajectories, along with other necessary quantitative measures. These two issues challenge computational scientists in several ways. MD simulations are run for long time periods, and the numerical integration techniques they rely on involve discretization errors and stability restrictions which, when not kept in check, may corrupt the numerical solutions to the point that they have no meaning and no useful inferences can be drawn from them. Different strategies, such as globally stable numerical integrators and multiple-time-step implementations, have been used in this respect (see [27, 31]). [Pg.484]
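As a concrete illustration of a stable integrator, the sketch below implements the velocity Verlet scheme, a standard symplectic method widely used in MD, for a single particle in a harmonic potential. It is a toy example under assumed parameters, not code from the cited references:

```python
import numpy as np

def velocity_verlet(x0, v0, force, mass, dt, n_steps):
    """Integrate one particle's motion with the velocity Verlet scheme."""
    x, v = x0, v0
    a = force(x) / mass
    traj = [x]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt ** 2   # position update
        a_new = force(x) / mass              # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update
        a = a_new
        traj.append(x)
    return np.array(traj)

# Harmonic oscillator, F(x) = -k*x. Being symplectic, velocity Verlet keeps
# the energy bounded over long runs instead of drifting, which is the point
# made above about keeping discretization errors in check.
k, m = 1.0, 1.0
traj = velocity_verlet(x0=1.0, v0=0.0, force=lambda x: -k * x,
                       mass=m, dt=0.05, n_steps=10_000)
print(traj[:5])
```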

