Big Chemical Encyclopedia


Ranking model quality

The overall ranking model quality, i.e. taking into account all the R responses, can be evaluated by the following expressions ... [Pg.201]

The overall ranking model quality, i.e. taking into account all the four responses, has been evaluated from the above parameters by arithmetic mean (QT), geometric mean (QG) and by the minimum value obtained on the four responses (QM) ... [Pg.212]
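The three summaries named above can be sketched in a few lines. The per-response quality values below are hypothetical placeholders, not data from the source; the point is only how the arithmetic mean (QT), geometric mean (QG), and minimum over responses (QM) combine the same four numbers.

```python
import math

def overall_quality(q):
    """Combine per-response model qualities q (each in [0, 1]) into the
    three overall summaries: arithmetic mean, geometric mean, minimum."""
    n = len(q)
    qt = sum(q) / n                 # QT: arithmetic mean over responses
    qg = math.prod(q) ** (1.0 / n)  # QG: geometric mean over responses
    qm = min(q)                     # QM: worst (minimum) response quality
    return qt, qg, qm

# Hypothetical qualities for four responses
q = [0.92, 0.85, 0.78, 0.88]
qt, qg, qm = overall_quality(q)
```

Note that QM <= QG <= QT always holds, so the minimum is the most conservative summary and the arithmetic mean the most forgiving.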

The results show that, for the set of 55 halocarbons, a very high level of ab initio calculation was required before there was any significant improvement in model quality over and above the models derived from easily calculable TS and TC descriptors. When ranking the TS + TC + 3-D + cc-pVTZ model descriptors... [Pg.491]

Depending on the quality of data and the method selected, constraints on the parameters to be estimated may be required in order to get a chemically meaningful solution. In the case of multivariate curve resolution (MCR) (see Section 3.2) performed on one 2D NMR spectrum, application of constraints is mandatory. If constraints are not applied, it can be shown that there is an infinity of equally well-fitting solutions and hence the true underlying parameters (spectra, concentrations) cannot be estimated directly. This is known as the rotational ambiguity of two-way low-rank models. [Pg.214]
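The rotational ambiguity mentioned above can be shown with a tiny numeric example: for any bilinear model D = C St, every invertible matrix T yields an alternative pair (C T, T⁻¹ St) that fits D exactly as well. The matrices below are arbitrary toy values, not real spectra or concentrations.

```python
def matmul(A, B):
    """Plain-Python matrix product of two nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Toy bilinear model: 3 mixtures x 2 components x 3 channels
C  = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]   # "concentration" profiles
St = [[1.0, 2.0, 0.0], [0.0, 1.0, 3.0]]     # "spectral" profiles
D  = matmul(C, St)                          # the observed data

# Any invertible T (with its inverse Tinv) rotates the factors ...
T    = [[2.0, 0.0], [1.0, 1.0]]
Tinv = [[0.5, 0.0], [-0.5, 1.0]]
C2   = matmul(C, T)
St2  = matmul(Tinv, St)
D2   = matmul(C2, St2)                      # ... yet reproduces D exactly
```

Since (C, St) and (C2, St2) reproduce D identically, the data alone cannot distinguish them, which is exactly why constraints (non-negativity, unimodality, known spectra) are mandatory in MCR.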

Variable subset selection is performed by GAs, optimising populations of models according to a defined objective function related to model quality. In partial-ranking models, the objective function is an expression of the degree of agreement between the element ranking resulting from the experimental attributes and that provided by the selected subset of model attributes. [Pg.189]
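The text does not name the exact agreement measure, so as a hedged stand-in this sketch uses Spearman's rank correlation between a hypothetical experimental ranking and the ranking produced by a selected variable subset; any rank-agreement index could play the same role as the GA objective.

```python
def spearman(rank_a, rank_b):
    """Spearman rank correlation for two complete rankings 1..n
    (no ties): 1 - 6*sum(d^2) / (n*(n^2 - 1))."""
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical rankings of five elements
exp_rank   = [1, 2, 3, 4, 5]   # from experimental attributes
model_rank = [2, 1, 3, 4, 5]   # from the selected attribute subset
agreement  = spearman(exp_rank, model_rank)
```

A GA would then evolve the variable subset so as to maximise this agreement value.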

In Eqs. 5.6 and 5.7, LL is the log-likelihood, k the number of model parameters, and n the number of cases. Lower values of both AIC and BIC indicate improved model fit [38]. However, both lack a normalized scale, so low values have to be seen relative to the models being compared [38]. These relative differences in AIC and BIC are useful for ranking models with respect to predictive quality despite different numbers of model parameters. Further indications on relative differences in BIC and their meaning for variable selection are given in [39]. The BIC is closely related to the AIC, but it places a stronger emphasis on parsimony, i.e. a heavier over-fitting penalty. [Pg.100]
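Assuming the standard forms AIC = 2k − 2LL and BIC = k ln(n) − 2LL (the source's Eqs. 5.6 and 5.7 are not reproduced here), the following sketch ranks three hypothetical candidate models and shows how BIC's stronger parsimony penalty can flip the preferred model relative to AIC.

```python
import math

def aic(ll, k):
    """Akaike information criterion: 2k - 2*LL (lower is better)."""
    return 2 * k - 2 * ll

def bic(ll, k, n):
    """Bayesian information criterion: k*ln(n) - 2*LL (lower is better)."""
    return k * math.log(n) - 2 * ll

# Hypothetical candidates: name -> (log-likelihood, number of parameters)
models = {"A": (-100.0, 3), "B": (-96.0, 6), "C": (-95.5, 9)}
n = 50  # hypothetical number of cases

best_aic = min(models, key=lambda m: aic(*models[m]))
best_bic = min(models, key=lambda m: bic(*models[m], n))
# AIC prefers the better-fitting 6-parameter model "B"; BIC's heavier
# penalty (ln(50) > 2 per parameter) prefers the leaner model "A".
```

Only the *differences* in AIC or BIC between candidates are meaningful, which is why the absolute values carry no normalized interpretation.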

Category or rank given to entities having the same functional use but different requirements for quality (ISO 8402); e.g., hotels are graded by star rating, automobiles are graded by model. [Pg.557]

Iman RL, Helton JC, Campbell JE. An approach to sensitivity analysis of computer models, Part II: Ranking of input variables, response surface validation, distribution effect and technique synopsis. J Qual Technol 1981;13:232-40. [Pg.101]

VolSurf was also successfully applied in the literature to predict absorption properties [156] from experimental drug permeability data of 55 compounds [165] in Caco-2 cells (a human intestinal epithelial cell line derived from a colorectal carcinoma) and MDCK (Madin-Darby canine kidney) cell monolayers. In this interesting case, it was shown that models including counterions for charged molecules have significantly better quality and overall performance. The final model was also able to correctly predict, to a great extent, the relative ranking of molecules from another Caco-2 permeability study by Yazdanian et al. [166]. [Pg.353]

The quality of a PLS model should always be tested. Model quality is often determined by A, the rank, so a rank determination is included in the quality testing. For testing a PLS model, samples of known composition that were left outside the calibration set are used. By using the predicted values for these samples,... [Pg.408]
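A full PLS implementation is beyond a short sketch, but the rank-selection step the text describes can be illustrated: given predictions of the external test samples at each candidate rank A, compute the prediction error (RMSEP here, a common but assumed choice) and keep the rank that predicts best. The predictions below are hypothetical.

```python
import math

def rmsep(y_true, y_pred):
    """Root-mean-square error of prediction over external test samples."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

# Known compositions of external test samples, and hypothetical
# predictions from PLS models of increasing rank A
y_true = [1.0, 2.0, 3.0, 4.0]
preds_by_rank = {
    1: [1.4, 2.5, 2.6, 3.5],   # under-fitted: large errors
    2: [1.1, 2.1, 2.9, 3.9],   # best predictions
    3: [1.2, 1.8, 3.2, 4.3],   # over-fitted: errors grow again
}
best_rank = min(preds_by_rank, key=lambda a: rmsep(y_true, preds_by_rank[a]))
```

Using samples left outside the calibration set guards against choosing an over-fitted rank that only looks good on the calibration data.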

Other important applications include the generation of a model to predict thermodynamic water solubility (Cruciani et al. 2003). This model is based on consistent solubility data from the literature plus additional measurements for 970 compounds. Its quality allows differentiation between very poorly, poorly, medium, highly, and very highly soluble molecules, while exact rankings within individual classes are not possible. However, given the different factors influencing experimental thermodynamic solubility data, it is not likely that significantly improved models for this key property in pharmaceutical sciences can be derived. [Pg.418]

A9.3.6.2.2 Pedersen et al. (1995) provide a data quality-scoring system, which is compatible with many others in current use, including that used by the US EPA for its AQUIRE database. See also Mensink et al. (1995) for discussions of data quality. The data quality-scoring system described in Pedersen et al. includes a reliability ranking scheme, which can serve as a model for classifying under the harmonized scheme. The first three levels of data described by Pedersen are for preferred data. [Pg.459]

From the population, pairs of models are selected (randomly or with a probability proportional to their quality). Then, for each pair of models, the common characteristics are preserved (i.e. variables excluded in both models remain excluded, variables included in both models remain included). For variables included in one model and excluded from the other, a random number is drawn and compared with the crossover probability pc: if the random number is lower than the crossover probability, the excluded variable is included in the model, and vice versa. Finally, the statistical parameter for the new model is calculated; if the parameter value is better than the worst value in the population, the model is included in the population in the place corresponding to its rank; otherwise, it is no longer considered. This procedure is repeated for several pairs (for example, 100 times). [Pg.469]
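The crossover step above can be sketched as an operation on Boolean variable-inclusion masks. This is a minimal reading of the description, with hypothetical parent models; the evaluation and rank-insertion steps are left out.

```python
import random

def crossover(parent_a, parent_b, pc, rng):
    """Build a child inclusion mask from two parent masks.

    Positions where both parents agree (both include or both exclude
    the variable) are preserved; where they disagree, the variable is
    included with probability pc, as described in the text."""
    child = []
    for a, b in zip(parent_a, parent_b):
        if a == b:
            child.append(a)                 # common characteristic: keep
        else:
            child.append(rng.random() < pc)  # disagreement: draw vs pc
    return child

rng = random.Random(0)                       # seeded for repeatability
parent_a = [True, True,  False, False, True]
parent_b = [True, False, False, True,  True]
child = crossover(parent_a, parent_b, pc=0.5, rng=rng)
```

In the full GA, the child model's statistical parameter would then be computed, and the child inserted into the rank-ordered population only if it beats the current worst member.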

This is the dilemma—sensory properties should rank very high, but they don't, because we lack the tools to measure them effectively. For the most part, these quality measures are subjective rather than objective, and frequently they require direct testing with consumers to determine the efficacy of a particular product attribute. So the issue is really a lack of physical measurement tools that directly assess the performance measures important to the consumer of the product. The lack of objective performance measures and unknown mechanistic equations also makes mathematical modeling very difficult for addressing quality problems. [Pg.1361]

An unbiased method to evaluate the quality of ranks for true positives is to compute the so-called ROC curves. The curves are obtained by plotting the number of true positives against the number of false positives in the score-ordered list. The model screening performance is most often evaluated numerically as the Area Under Curve, or AUC (43). However, the lack of focus on the low-scoring part of the ranked list, in our humble opinion, makes ROC AUC an inferior optimization function to the rank-square-root or log-AUC function. ROC AUC may still be a good function to report and compare screening performance. [Pg.273]
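The ROC AUC described above can be computed directly from the score-ordered list of labels: walking down the ranking, each false positive contributes a horizontal step whose height is the number of true positives already seen. The labels below are hypothetical.

```python
def roc_auc(labels_in_score_order):
    """AUC from labels (1 = true positive, 0 = false positive) ordered
    from best score to worst, via the ROC 'staircase' area."""
    pos = sum(labels_in_score_order)
    neg = len(labels_in_score_order) - pos
    tp = area = 0
    for label in labels_in_score_order:
        if label == 1:
            tp += 1          # vertical step on the ROC curve
        else:
            area += tp       # each FP step adds the current TP height
    return area / (pos * neg)

# Hypothetical score-ordered labels for a small screening run
ranked = [1, 1, 0, 1, 0, 0]
auc = roc_auc(ranked)
```

Because every actives-before-decoys pair contributes equally to this area, ROC AUC weights the top and bottom of the ranked list alike, which is precisely the lack of early-recognition focus the text criticises.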





