
Performance assessment, model selection

Figure 10. Model selection and assessment diagnostic performance measure S for random forest and partial least squares (PLS) methods applied to the BBB data for various percentages of the data (Ptrain) in the training set.
The following example is based on a risk assessment of di(2-ethylhexyl) phthalate (DEHP) performed by Arthur D. Little. The experimental dose-response data upon which the extrapolation is based are presented in Table II. DEHP was shown to produce a statistically significant increase in hepatocellular carcinoma when added to the diet of laboratory mice (14). Equivalent human doses were calculated using the methods described earlier, and the response was then extrapolated downward using each of the three models selected. The results of this extrapolation are shown in Table III for a range of human exposure levels from ten micrograms to one hundred milligrams per day. The risk is expressed as the number of excess lifetime cancers expected per million exposed population. [Pg.304]
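The excerpt does not reproduce Table II or name the three extrapolation models, so the following is only a minimal sketch of the general procedure using the one-hit model, risk(d) = 1 - exp(-q*d), one common low-dose extrapolation choice. The potency q and body weight are made-up placeholders, not the DEHP values from the original assessment.

```python
# A hedged sketch of low-dose extrapolation with a one-hit model.
# The potency q below is a hypothetical placeholder, NOT the fitted
# DEHP value; the one-hit form is an assumption, since the excerpt
# does not name the three models actually used.
import numpy as np

q = 1.2e-4                      # hypothetical potency, (mg/kg/day)^-1
bw = 70.0                       # assumed adult body weight, kg

doses_mg_day = np.array([0.01, 0.1, 1.0, 10.0, 100.0])  # 10 ug .. 100 mg/day
d = doses_mg_day / bw           # equivalent human dose, mg/kg/day

excess_risk = 1 - np.exp(-q * d)     # one-hit model: risk(d) = 1 - exp(-q d)
per_million = excess_risk * 1e6      # excess lifetime cancers per 10^6 exposed

for dose, r in zip(doses_mg_day, per_million):
    print("%8.2f mg/day -> %10.3f excess cancers per million" % (dose, r))
```

At low doses the one-hit model is nearly linear in dose, which is why risk estimates from it scale almost proportionally across the exposure range shown in Table III.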

An important aspect of variable selection that is often overlooked is the hazard brought about through the use of cross-validation for two quite different purposes, namely (1) as an optimization criterion for variable selection and other model optimization tasks (including selection of the optimal number of PLS LVs or PCR PCs), and (2) as an assessment of the quality of the final model built using all samples. In this case, one can get highly optimistic estimates of a model's performance, because the same criterion is used to both optimize and evaluate the model. As a result, when doing variable selection, especially with a limited number of calibration samples, it is advisable to do an additional outer-loop cross-validation across the entire model... [Pg.424]
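A minimal sketch of such an outer-loop (nested) cross-validation, assuming a scikit-learn workflow with synthetic data: the inner loop is used only as the optimization criterion (here, selecting the number of PLS latent variables), while the outer loop is reserved for assessing the whole model-building procedure.

```python
# Nested (double) cross-validation sketch; data and names are illustrative.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))          # hypothetical calibration data
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=60)

# Inner loop: cross-validation as the optimization criterion,
# here selecting the number of PLS latent variables (LVs).
inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)
search = GridSearchCV(
    PLSRegression(),
    param_grid={"n_components": range(1, 11)},
    cv=inner_cv,
    scoring="neg_root_mean_squared_error",
)

# Outer loop: a separate cross-validation used only to assess the
# final procedure, so the same folds never both optimize and
# evaluate the model.
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)
scores = cross_val_score(search, X, y, cv=outer_cv,
                         scoring="neg_root_mean_squared_error")
print("outer-loop RMSECV estimate: %.3f" % -scores.mean())
```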

The first case study selected for assessment of process performance is the bulk separation of a 50/50 H2(A)/CH4(B) gas mixture using a PSA unit with activated carbon as the adsorbent, coupled to a membrane with selectivity αAB = 35. The values of the parameters of the PSA model are given in the original reference and are not reproduced here to save space. The standalone models for the membrane and PSA unit were validated by comparing numerical results with experimental data reported in the literature. For the separation under study, the integrated system attained the cyclic steady state (CSS) after the 11th cycle from startup for all runs. Unless otherwise stated, the operating conditions considered are PH/PL = 35/1.2, ... = 2500 cm, F (total feed/cycle) =... [Pg.357]

In contrast to the hypothesis-testing style of model selection/discrimination, the posterior predictive check (PPC) assesses the predictive performance of the model. This approach allows the user to reformulate the model selection decision to be based on how well the model performs. This approach has been described in detail by Gelman et al. (27) and is only briefly discussed here. PPC has been assessed for PK analysis in a non-Bayesian framework by Yano et al. (40). Yano and colleagues also provide a detailed assessment of the choice of test statistics. The more commonly used test statistic is a local feature of the data that has some importance for model predictions; for example, the maximum or minimum concentration might be important for side effects or therapeutic success (see Duffull et al. (6)) and hence constitutes a feature of the data that the model would do well to describe accurately. The PPC can be defined along the lines that posterior refers to conditioning of the distribution of the parameters on the observed values of the data, predictive refers to the distribution of future unobserved quantities, and check refers to how well the predictions reflect the observations (41). This method is used to answer the question "Does the observed data look plausible under the posterior distribution?" This method is therefore solely a check of internal consistency of the model in question. [Pg.156]
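A minimal sketch of how a PPC can be carried out, under stated assumptions: a one-compartment first-order-absorption PK model, Cmax as the test statistic, and fabricated "posterior draws" standing in for the output of a real Bayesian fit.

```python
# Posterior predictive check (PPC) sketch for a one-compartment PK model.
# The posterior draws below are placeholders; in practice they come from
# the MCMC sampler, not from distributions invented like this.
import numpy as np

rng = np.random.default_rng(0)
t = np.array([0.5, 1, 2, 4, 8, 12.0])             # sampling times (h)
y_obs = np.array([1.9, 2.8, 3.4, 2.7, 1.3, 0.6])  # observed conc. (mg/L)
dose, T_obs = 100.0, y_obs.max()                  # test statistic: Cmax

n_draws = 2000                                    # hypothetical posterior draws
ka = rng.lognormal(np.log(1.0), 0.2, n_draws)     # absorption rate (1/h)
ke = rng.lognormal(np.log(0.2), 0.2, n_draws)     # elimination rate (1/h)
V = rng.lognormal(np.log(20.0), 0.1, n_draws)     # volume (L)
sigma = rng.lognormal(np.log(0.3), 0.1, n_draws)  # residual SD (mg/L)

# "Predictive": simulate a replicated data set from each posterior draw.
T_rep = np.empty(n_draws)
for i in range(n_draws):
    conc = (dose * ka[i] / (V[i] * (ka[i] - ke[i]))) * (
        np.exp(-ke[i] * t) - np.exp(-ka[i] * t))
    y_rep = conc + rng.normal(scale=sigma[i], size=t.size)
    T_rep[i] = y_rep.max()                        # same statistic, Cmax

# "Check": where does the observed Cmax fall among the replicates?
p_ppc = np.mean(T_rep >= T_obs)
print("posterior predictive p-value for Cmax: %.2f" % p_ppc)
# Values near 0 or 1 flag a model that fails to reproduce this feature.
```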

To alleviate this biased estimation, resampling methods, such as cross-validation and bootstrapping, can be employed to more accurately estimate prediction error. In the next sections, these techniques are described, as well as the implications of their use in the framework of model selection and performance assessment. [Pg.224]
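A minimal sketch of the bootstrap approach, assuming a scikit-learn workflow with synthetic data: prediction error is estimated on the out-of-bag samples of each bootstrap replicate, and the .632 rule blends that estimate with the (optimistic) apparent error computed on the full training set.

```python
# Bootstrap estimation of prediction error; data and model are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=80)
n = len(y)

oob_errors = []
for _ in range(100):                          # bootstrap replicates
    idx = rng.integers(0, n, n)               # resample with replacement
    oob = np.setdiff1d(np.arange(n), idx)     # out-of-bag samples
    if oob.size == 0:
        continue
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(X[idx], y[idx])
    resid = y[oob] - model.predict(X[oob])
    oob_errors.append(np.mean(resid ** 2))    # error on unseen samples only

# The out-of-bag bootstrap error tends to be slightly pessimistic;
# the .632 rule blends it with the optimistic apparent error.
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
apparent = np.mean((y - model.predict(X)) ** 2)
err_boot = np.mean(oob_errors)
err_632 = 0.368 * apparent + 0.632 * err_boot
print("apparent %.3f, bootstrap %.3f, .632 %.3f"
      % (apparent, err_boot, err_632))
```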

TABLE 10.1 Lymphoma Data Results Using Split-Sample Approach for Model Selection and Performance Assessment... [Pg.228]

Dudoit, S., and van der Laan, M. J. (2005). Asymptotics of cross-validated risk estimation in model selection and performance assessment. Stat. Methodol., 2(2), 131-154. [Pg.247]

As follows from Table 13.4, in all cases the value that characterizes the external predictive performance is lower than the value q computed using the internal cross-validation. This means that the use of only two adjustable hyperparameters may cause model selection bias. In almost all cases the value q lies between ... and q. It is interesting to note that q for CMF models is usually higher than for CoMFA and CoMSIA models. The predictive performance assessed using the external 5-fold cross-validation procedure is especially high for ACE, DHFR, and THR. [Pg.444]
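A minimal sketch, under stated assumptions, of the distinction this passage draws: an internal cross-validated q^2 computed within the training set versus an external predictive q^2 on compounds held out of model building entirely. A PLS model on synthetic descriptors stands in for the CMF/CoMFA/CoMSIA field models, which this snippet does not construct.

```python
# Internal vs. external q^2 sketch; descriptors and model are illustrative.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))                 # hypothetical descriptors
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = PLSRegression(n_components=5)

def q2(y_true, y_pred, y_train):
    # 1 - PRESS / total sum of squares about the training-set mean
    return 1 - np.sum((y_true - y_pred) ** 2) / \
               np.sum((y_true - y_train.mean()) ** 2)

# Internal q^2: 5-fold cross-validation within the training set only.
y_cv = cross_val_predict(model, X_tr, y_tr,
                         cv=KFold(5, shuffle=True, random_state=2)).ravel()
print("internal q^2: %.3f" % q2(y_tr, y_cv, y_tr))

# External q^2: predictions for compounds never used in model building.
model.fit(X_tr, y_tr)
print("external q^2: %.3f" % q2(y_te, model.predict(X_te).ravel(), y_tr))
```

The external value is typically the lower of the two, which is the pattern Table 13.4 reports.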

