
Model selection criteria

Forbes, F., T. Marlin, and J. MacGregor. Model Selection Criteria for Economics-Based Optimizing Control. Comput. Chem. Eng. 18, 497-510 (1994). [Pg.580]

Magnanti, T. L., and Wong, R. T. Accelerating Benders Decomposition: Algorithmic Enhancement and Model Selection Criteria, Oper. Res. 29, 464-484 (1981). [Pg.243]

Common modifications to SSE and MSE lead to a class of metrics called discrimination functions. These functions, like the Akaike Information Criterion (AIC), are then used to choose between competing models. One problem with functions like the AIC and MSE is that the actual value of the function is impossible to interpret without some frame of reference. For instance, how can one interpret an MSE or an AIC of 45? Is that good or bad? Further, some discrimination functions are designed to be maximized whereas others are designed to be minimized. In this book, the model with the smallest discrimination function is superior to all other models having the same number of estimable parameters, unless otherwise noted. This class of functions will be discussed in greater detail in the section on Model Selection Criteria. [Pg.16]

In the pharmacokinetic literature, the F-test and AIC are often presented as independent model selection criteria, with the F-test being used with nested models and the AIC being used with non-nested models. Using these criteria in this manner is not entirely appropriate. The AIC and F-test are not independent, nor is the AIC limited to non-nested models. To see how the AIC and F-test are related, consider two models fit to n observations. The full model has f degrees of freedom and residual sum of squares SSEf. The reduced model has r degrees of freedom and residual sum of squares SSEr. The F-test comparing the reduced model to the full model is then... [Pg.27]
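The equation itself is not reproduced in this excerpt. For reference, the standard extra-sum-of-squares F statistic for comparing a reduced model against a full model is, assuming f and r here denote the residual degrees of freedom of the full and reduced fits (an assumption about the original notation),

F = \frac{(SSE_r - SSE_f)\,/\,(r - f)}{SSE_f\,/\,f}

which under the reduced model is referred to an F distribution with (r - f, f) degrees of freedom.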

Allen, T. V., and Greiner, R. (2000). Model selection criteria for learning belief nets: An empirical comparison. In Proc. 17th Int. Conf. Machine Learning, pp. 1047-1054. [Pg.279]

Further examples of in-depth accident databases are the Pedestrian Crash Data Study (PCDS) from the US [29] (which is also described in Sect. 5.2.1) and accident investigations carried out by vehicle manufacturers. The latter have a very high level of detail but suffer even more from biases due to low case numbers, model selection criteria, or geographic effects [16]. [Pg.26]

T. L. Magnanti and R. T. Wong. Accelerating Benders decomposition: algorithmic enhancement and model selection criteria. Oper. Res., 29(3):464-484, 1981. [Pg.445]

Fitness function. A model selection criterion used to compare models with different numbers of variables p and objects n ... [Pg.370]

Mallows Cp. Model selection criterion used to compare biased regression models with the full least squares regression model ... [Pg.370]
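The defining expression is elided above; the usual textbook form of Mallows' Cp for a candidate model with p parameters fitted to n observations, with \hat{\sigma}^2 the residual mean square of the full least-squares model, is

C_p = \frac{SSE_p}{\hat{\sigma}^2} - n + 2p

and a model with little bias is expected to give a Cp value close to p.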

Akaike Information Criterion, AICp. A model selection criterion for choosing between models with different parameters and defined as ... [Pg.371]
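The definition is likewise elided above; a commonly used least-squares form of the criterion, for a model with p estimated parameters, n observations, and residual sum of squares SSE_p, is

AIC_p = n \ln\left(\frac{SSE_p}{n}\right) + 2p

or, in likelihood terms, AIC = -2 \ln L + 2p; the model giving the smallest value is preferred.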

With GAM the data (covariate and individual Bayesian PM parameter estimates) would be subjected to a stepwise (single-term addition/deletion) modeling procedure. Each covariate is allowed to enter the model in any of several functional representations. The Akaike information criterion (AIC) is used as the model selection criterion (22). At each step, the model is changed by addition or deletion of the covariate that results in the largest decrease in the AIC. The search is stopped when the AIC reaches a minimum value. [Pg.389]
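As a point of reference, a minimal sketch of a single-term addition/deletion search driven by a least-squares AIC is given below. It is illustrative only: the plain linear terms and the ordinary-least-squares AIC are assumptions, not the GAM machinery described above (which also offers several functional forms per covariate).

import numpy as np

def ols_aic(y, X):
    # Least-squares AIC for a design matrix X that already includes an intercept column.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, p = X.shape
    return n * np.log(rss / n) + 2 * p

def stepwise_aic(y, covariates):
    # Greedy single-term addition/deletion; `covariates` maps name -> 1-D array.
    n = len(y)
    def design(names):
        return np.column_stack([np.ones(n)] + [covariates[c] for c in names])
    selected, best = [], ols_aic(y, design([]))
    improved = True
    while improved:
        improved = False
        # all single-term additions and single-term deletions from the current model
        candidates = [selected + [c] for c in covariates if c not in selected]
        candidates += [[s for s in selected if s != c] for c in selected]
        for names in candidates:
            score = ols_aic(y, design(names))
            if score < best:                 # keep the change giving the largest AIC drop
                best, selected, improved = score, names, True
    return selected, best

In a GAM setting, each covariate would additionally be offered in several candidate transformations, with each transformation competing as a separate addition step.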

Ludden TM, Beal SL, Sheiner LB. Comparison of the Akaike Information Criterion, the Schwarz criterion and the F test as guides to model selection. J Pharmacokinet Biopharm 1994;22:431-45. [Pg.525]

If the structure of the models is more complex and we have more than one independent variable or we have more than two rival models, selection of the best experimental conditions may not be as obvious as in the above example. A straightforward design to obtain the best experimental conditions is based on the divergence criterion. [Pg.192]
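The divergence criterion is not written out in this excerpt. In its simplest form (often attributed to Hunter and Reiner), the next experiment is placed at the conditions x that maximize the squared difference between the predictions of the rival models, summed over all model pairs when more than two models compete:

D(x) = \sum_{i=1}^{M-1} \sum_{j=i+1}^{M} \left[ \hat{y}^{(i)}(x) - \hat{y}^{(j)}(x) \right]^2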

Hurvich, C. and C. L. Tsai. A Corrected Akaike Information Criterion for Vector Autoregressive Model Selection. J Time Series Anal 14, 271-279 (1993). [Pg.104]

An important point is the evaluation of the models. While most methods select the best model on the basis of a criterion like adjusted R2, AIC, BIC, or Mallows' Cp (see Section 4.2.4), the resulting optimal model is not necessarily optimal for prediction. These criteria take into consideration the residual sum of squared errors (RSS), and they penalize for a larger number of variables in the model. However, selection of the final best model has to be based on an appropriate evaluation scheme and on an appropriate performance measure for the prediction of new cases. A final model selection based on fit criteria (as mostly used in variable selection) is not acceptable. [Pg.153]
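A small sketch of this distinction, assuming scikit-learn is available and using synthetic data and hypothetical candidate variable subsets: the final choice among the candidates is made on cross-validated prediction error for new cases, not on a fit criterion.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def cv_rmse(X, y, cols):
    # 10-fold cross-validated RMSE of an OLS model restricted to the given columns.
    scores = cross_val_score(LinearRegression(), X[:, cols], y,
                             scoring="neg_root_mean_squared_error", cv=10)
    return -scores.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=100)

# hypothetical candidate subsets, e.g. shortlisted by a stepwise search on AIC/BIC/Cp
candidates = [[0, 1], [0, 1, 2], [0, 1, 2, 3, 4, 5]]
best = min(candidates, key=lambda cols: cv_rmse(X, y, cols))
print("subset chosen by cross-validated RMSE:", best)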

The Akaike Information Criterion is used to select the model which minimises the AIC(φ) function for a specified value of φ. In the original formulation of the above equation, Akaike used a value of φ = 2, but an alternative selection criterion proposed by Leontaritis and Billings [Leontaritis and Billings, 1987] is based on a value of φ = 4. [Pg.111]
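The "above equation" is not reproduced in this excerpt; in the system-identification literature the criterion is commonly written, for a model with n_\theta estimated parameters fitted to N data points with residual variance \hat{\sigma}^2_\varepsilon (an assumption about the exact notation used in the source), as

AIC(\varphi) = N \ln \hat{\sigma}^2_\varepsilon + \varphi\, n_\theta

so that \varphi controls how heavily additional parameters are penalized.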

In the case of iterative medium-throughput screening, at any given point in the process, the set of molecules that has been screened thus far serves as the previously selected set for the next round of screening. In choosing molecules for the next iteration, one may have a selection criterion such as predictive model scores, but a diversity criterion may also be applied: it is not desirable to screen something identical, or nearly identical, to that which was screened in previous rounds. [Pg.82]
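A minimal numpy sketch of combining the two criteria. The representation of molecules as precomputed descriptor vectors, the Euclidean distance cutoff, and the predicted scores are all illustrative assumptions: candidates are taken in order of predicted score and accepted only if they are not too close to anything already screened or already picked.

import numpy as np

def select_next_round(scores, fingerprints, screened_fps, n_pick, min_dist=0.3):
    # Greedy score-ranked selection with a simple diversity filter.
    # scores: (n,) predicted model scores, higher is better.
    # fingerprints: (n, d) descriptor vectors of the candidate molecules.
    # screened_fps: (m, d) descriptors of molecules screened in earlier rounds.
    picked = []
    kept = list(screened_fps)              # everything new picks must stay away from
    for idx in np.argsort(scores)[::-1]:   # best-scoring candidates first
        fp = fingerprints[idx]
        if kept and min(np.linalg.norm(fp - k) for k in kept) < min_dist:
            continue                       # too similar to something already screened/picked
        picked.append(idx)
        kept.append(fp)
        if len(picked) == n_pick:
            break
    return picked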

In this chapter, we discuss the choice of screening designs for model selection via the three elements of the design matrix D, the model f, and a criterion based on X. One important feature of the design screening problem is that the true model is usually unknown. If we denote the set of all possible models that might be fitted by T = {f1, . . . , fu}, where u is the number of all possible models, then the optimality criterion for design selection should be based on all possible models, rather than on a specific model in T. [Pg.210]

Models can be generated using stepwise addition multiple linear regression as the descriptor selection criterion. Leaps-and-bounds regression [10] and simulated annealing (ANNUN) can be used to find a subset of descriptors that yield a statistically sound model. The best descriptor subset found with multiple linear regression can also be used to build a computational neural network model. The root mean square (rms) errors and the predictive power of the neural network model are usually improved due to the higher number of adjustable parameters and nonlinear behavior of the computational neural network model. [Pg.113]
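A rough sketch of the descriptor-subset-then-network workflow, assuming scikit-learn is available and substituting an exhaustive search over small subsets for leaps-and-bounds regression; the data set, subset size, and network settings are illustrative assumptions.

import itertools
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))          # 10 candidate descriptors
y = X[:, 2] + 0.5 * X[:, 5] ** 2 + rng.normal(scale=0.2, size=200)

def cv_mse(cols):
    # cross-validated MSE of an MLR model on the chosen descriptor columns
    return -cross_val_score(LinearRegression(), X[:, cols], y,
                            scoring="neg_mean_squared_error", cv=5).mean()

# exhaustive search over 3-descriptor subsets (stand-in for leaps-and-bounds)
best_subset = min(itertools.combinations(range(10), 3), key=lambda c: cv_mse(list(c)))

# reuse the selected descriptors to build a small neural network model
net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
net_rmse = np.sqrt(-cross_val_score(net, X[:, list(best_subset)], y,
                                    scoring="neg_mean_squared_error", cv=5).mean())
print("best descriptor subset:", best_subset, " neural network CV RMSE:", round(net_rmse, 3))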

One criterion often used for selecting the principal components is to use only those components which correspond to eigenvalues greater than one [9]. Another criterion is to use as many components as necessary to describe a specified amount, say 80 %, of the total sum of squares. We will not use any of these criteria to determine the principal components. Instead, we shall use a criterion based on cross validation throughout this book. The cross validation criterion determines the optimum number of components to ensure a maximum reliability in the prediction by the model. This criterion is discussed below, after a discussion of how a principal component model can be determined by a step-wise procedure. [Pg.361]
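For concreteness, a small numpy sketch of the two conventional criteria just mentioned (eigenvalues greater than one, and a cumulative variance threshold of 80 %); the cross-validation criterion preferred in this book is not reproduced here, and the data matrix is an illustrative placeholder.

import numpy as np

X = np.random.default_rng(2).normal(size=(50, 8))       # example data matrix
Xc = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)        # autoscaled columns
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]   # descending eigenvalues

n_kaiser = int(np.sum(eigvals > 1.0))                    # components with eigenvalue > 1
explained = np.cumsum(eigvals) / eigvals.sum()
n_80pct = int(np.searchsorted(explained, 0.80) + 1)      # components covering >= 80 % of variance

print("eigenvalue-one criterion:", n_kaiser, "components")
print("80 % variance criterion:", n_80pct, "components")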

A common form of model selection is to maximize the likelihood that the data arose under the model. For non-Bayesian analysis this is the basis of the likelihood ratio test, where the difference of two -2LL values (where LL denotes the log-likelihood) for nested models is assumed to be approximately asymptotically chi-squared distributed. A Bayesian approach (see also the Schwarz criterion (36)) is based on computation of the Bayesian information criterion (BIC), which minimizes the Kullback-Leibler (KL) information (37). The KL information relates to the ratio of the distribution of the data given the model and parameters to the underlying true distribution of the data. The similarity of the KL information expression (Eq. (5.24)) and Bayes's formula (Eq. (5.1)) is easily seen ... [Pg.154]
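The BIC itself is not written out in this excerpt; its standard form for a model with k estimated parameters, n observations, and maximized likelihood L is

BIC = -2 \ln L + k \ln n

and, as with the AIC, the model with the smallest value is preferred.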

It is most likely that the critical line asymptotes observed for the conventional types II, III, and IV (Figure 8) should end not at infinite pressure but at the second (or third) critical point of the pure component. To study the possibility of a continuous critical line path from the stable critical point of one component to the metastable critical point of the other component, type III phase behavior was chosen. The selection criterion for the thermodynamic model parameters for type III was extracted from the global phase diagram for the binary van der Waals mixture... [Pg.228]

Still, even when following Wagner's guidelines, it may be that many different models fit the data equally well. The situation becomes more complex when variables other than time are included in the model, such as in a population pharmacokinetic analysis. Often it is then of interest to compare a number of different models because the analyst is unclear which model is the more appropriate one when different models fit the same data almost equally well. For example, the Emax model is routinely used to analyze hyperbolic data, but it is not the only model that can be used. A Weibull model can be used with equal success and justification. One method that can be used to discriminate between rival models is to run another experiment, changing the conditions and seeing how the model predictions perform, although this may be impractical due to time or fiscal constraints. Another alternative is to base model selection on some a priori criterion. [Pg.21]
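A minimal sketch of such a comparison, assuming SciPy is available and using synthetic hyperbolic concentration-effect data; the data, starting values, and the least-squares form of the AIC are illustrative assumptions rather than part of the original text.

import numpy as np
from scipy.optimize import curve_fit

def emax(c, e_max, ec50):
    return e_max * c / (ec50 + c)

def weibull(c, e_max, b, g):
    return e_max * (1.0 - np.exp(-b * c ** g))

def ls_aic(y, yhat, n_par):
    # least-squares AIC used as the a priori model selection criterion
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_par

rng = np.random.default_rng(3)
conc = np.linspace(0.1, 50, 25)
effect = emax(conc, 100, 5) + rng.normal(scale=3, size=conc.size)   # hyperbolic data

p_emax, _ = curve_fit(emax, conc, effect, p0=[100, 5])
p_wb, _ = curve_fit(weibull, conc, effect, p0=[100, 0.1, 1.0])

print("AIC Emax   :", round(ls_aic(effect, emax(conc, *p_emax), 2), 2))
print("AIC Weibull:", round(ls_aic(effect, weibull(conc, *p_wb), 3), 2))

With both rival models fit to the same data, the a priori criterion (here the AIC) simply favors the model giving the smaller value.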

First of all, the decision must be made whether and where models are to be applied and what types of model (e.g., detailed, parsimonious) could be used. The most important selection criterion is the required accuracy of the results: if there is demand for very accurate and detailed model results, a more sophisticated model has to be applied, and relevant data have to be collected accordingly (Højberg et al., 2006). Important aspects should be uncertainty assessment and quality assurance. [Pg.188]

Bozdogan, H. (1987) Model selection and Akaike's information criterion (AIC): the general theory and its analytical extensions. Psychometrika 52, 345-370. [Pg.418]

