
Principle of model parsimony

In recent years, there has been a re-appreciation of the work of Jeffreys on the application of Bayesian methods [121], especially due to the expository publications of E.T. Jaynes [118,120]. In particular, the Bayesian approach to model class selection has been further developed by showing that the evidence for each model class provided by the data (that is, the probability of getting the data based on the whole model class) automatically enforces a quantitative expression of a principle of model parsimony, or of Ockham's razor [98,164,242]. There is no need to introduce any ad-hoc penalty term, as was done in some of the earlier work on this problem. [Pg.214]

... (Jeffreys 1939). Early quantitative forms for a Principle of Model Parsimony utilized a penalty term against using a larger number of uncertain (adjustable) parameters, in combination with a quantification of the model data fit based on the log likelihood of the optimal model in the model class (Akaike 1974); however, the form of this penalty term did not have a very rigorous basis. Subsequent work made it clear that Bayes' Theorem at the model class level automatically enforces model parsimony without ad-hoc penalty terms (Gull 1989, Mackay 1991, Beck & Yuen 2004). [Pg.416]

Bayesian methods for model updating and model class selection can be used to study systems which are essentially unidentifiable using classical system identification approaches. Additionally, viewing the problem of model class selection in a Bayesian context allows for a quantitative form for a Principle of Model Parsimony with an information-theoretic interpretation of model complexity (it relates to the amount of information extracted from the data by the model class). [Pg.424]
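As a concrete illustration of how the evidence automatically penalizes superfluous parameters, the sketch below (not taken from the cited works; the synthetic data, noise level and Gaussian priors are assumptions made purely for illustration) computes the closed-form log evidence of polynomial model classes of increasing order for data generated from a linear trend.

```python
import numpy as np

def log_evidence(y, Phi, noise_var, prior_var=1.0):
    """Closed-form log marginal likelihood (evidence) of a linear-in-parameters
    Gaussian model y = Phi @ theta + eps, with prior theta ~ N(0, prior_var*I)
    and noise eps ~ N(0, noise_var*I).  Marginalizing theta gives
    y ~ N(0, noise_var*I + prior_var * Phi @ Phi.T)."""
    N = len(y)
    C = noise_var * np.eye(N) + prior_var * (Phi @ Phi.T)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (N * np.log(2 * np.pi) + logdet + y @ np.linalg.solve(C, y))

# Synthetic data from a linear trend with small Gaussian noise (illustrative only)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 0.3 + 1.5 * x + 0.1 * rng.standard_normal(x.size)

for degree in (1, 3, 6):                                  # competing model classes
    Phi = np.vander(x, degree + 1, increasing=True)       # columns 1, x, x^2, ...
    print(degree, round(log_evidence(y, Phi, noise_var=0.01), 2))
```

On such data the lowest-order class that captures the trend typically attains the highest log evidence, even though the higher-order classes fit the observations slightly better; the "wasted" prior volume of the extra coefficients acts as the automatic Ockham penalty, with no ad-hoc term added.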

In the context of model class selection, the principle of model parsimony expresses that simpler models that are reasonably consistent with the observed data are preferable to more complicated models (e.g., with more modeling parameters), even though the more complex models might result in a better data fit. In other words, the selected class of models should be able to reproduce the observed system behavior closely, but should otherwise be as simple as possible. This principle of model simplicity is also often referred to as Occam's razor and is pursued in MCS for two reasons: firstly, the achieved improvement in data fit is in most cases only marginal, and secondly, an over-parameterized model may lead to an over-fit of the data. This often results in relatively poor predictions made by the model, due to the high sensitivity of the model to small changes in the data. [Pg.1526]

Distance matrix methods simply count the number of differences between two sequences. This number is referred to as the evolutionary distance, and its exact size depends on the evolutionary model used. The actual tree is then computed from the matrix of distance values by running a clustering algorithm that starts with the most similar sequences (i.e., those that have the shortest distance between them) or by trying to minimize the total branch length of the tree. The principle of maximum parsimony searches for a tree that requires the smallest number of changes to explain the differences observed among the taxa under study. [Pg.345]
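A minimal sketch of both ideas, using toy aligned sequences that are purely hypothetical: pairwise distances are obtained by counting differences, and the Fitch algorithm scores how many character changes a given candidate tree requires, which is the quantity a maximum-parsimony search would minimize over topologies.

```python
from itertools import combinations

seqs = {                         # toy aligned sequences (hypothetical)
    "A": "GATTACA",
    "B": "GATTATA",
    "C": "GACTACA",
    "D": "CACTATA",
}

def hamming(a, b):
    """Number of positions at which two aligned sequences differ."""
    return sum(x != y for x, y in zip(a, b))

# Distance matrix: the crudest evolutionary model, a raw count of differences
dist = {(i, j): hamming(seqs[i], seqs[j]) for i, j in combinations(seqs, 2)}
print(min(dist, key=dist.get))   # most similar pair -> first to be clustered

def fitch(tree, site):
    """Fitch parsimony for one alignment column: returns the candidate state
    set at the root and the minimum number of changes on the given tree."""
    if isinstance(tree, str):                      # leaf: observed state
        return {seqs[tree][site]}, 0
    (sl, cl), (sr, cr) = (fitch(child, site) for child in tree)
    common = sl & sr
    return (common, cl + cr) if common else (sl | sr, cl + cr + 1)

tree = (("A", "B"), ("C", "D"))                    # one candidate topology
score = sum(fitch(tree, k)[1] for k in range(len(seqs["A"])))
print(score)                     # total changes this tree requires
```

A real analysis would, of course, search over many topologies and use an explicit model of sequence evolution for the distances; the sketch only shows the two scoring steps.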

The principle of parsimonious parametrization should be applied, according to which the model with as few parameters as possible should be selected; this is in accordance with Wheeler's recommendation that "the best model is the simplest one that works". This is also a guideline for building increasingly complex models: stop when the model is sufficiently complex to get an adequate fit of the data. [Pg.551]

By slowly increasing the complexity of the models in this fashion, it was hoped that a model could be obtained that was just sufficiently complex to allow an adequate fit of the data. This conscious attempt to select a model that satisfies the criteria of adequate data representation and of minimum number of parameters has been called the principle of parsimonious parameterization. It can be seen from the table that the residual mean squares progressively decrease until entry 4. Then, in spite of the increased model complexity and increased number of parameters, a better fit of the data is not obtained. If the reaction order for the naphthalene decomposition is estimated, as in entry 5, the estimate is not incompatible with the unity order of entry 4. If an additional step is added as in entry 6, no improvement of fit is obtained. Furthermore, the estimated parameter for that step is negative and poorly defined. Entry 7 shows yet another model that is compatible with the data. If further discrimination between these two remaining rival models is desired, additional experiments must be conducted, for example, by using the model discrimination designs discussed later. The critical experiments necessary for this discrimination are by no means obvious (see Section VII). [Pg.121]
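The numerical sketch below mimics that procedure on synthetic data (it is not the naphthalene system of the table): models with an increasing number of parameters are fitted by least squares and the residual mean square is tracked until it stops improving.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 40)
y = 1.0 + 0.8 * x - 0.5 * x**2 + 0.1 * rng.standard_normal(x.size)  # quadratic truth

for n_params in range(1, 7):                    # entries of increasing complexity
    Phi = np.vander(x, n_params, increasing=True)
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    sse = float(np.sum((y - Phi @ theta) ** 2))
    rms = sse / (x.size - n_params)             # residual mean square
    print(n_params, round(rms, 4))
```

The residual mean square drops sharply up to the three-parameter (quadratic) entry and then levels off near the noise variance, which is the point at which the principle of parsimonious parameterization says to stop.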

Table XII would indicate that the effect of water is not described adequately by the model. Utilizing the principle of parsimonious parameterization, one can consider both water and carbon dioxide to be adsorbed and oxygen to be nonadsorbed, resulting in the three-parameter model 4. The residuals in Table XII for model 4, however, are correlated with the oxygen level. Hence model 5 would perhaps be preferable, for it likewise contains only three parameters while allowing adsorbed oxygen. The random residuals of Table XII for model 5 indicate that this model cannot be rejected using the...
The belief is that the statistical method used (such as PLS, PCR, MLR, PCA, ANNs) will extract from the data those variables which are most important, and discard irrelevant information. Statistical theory shows that this is incorrect. In particular, the principle of parsimony states that a simple model (one with fewer variables or parameters), if it is just as good at predicting a particular set of data as a more complex model, will tend to be better at predicting a new, previously unseen data set [153-155]. Our work has shown that this principle holds. [Pg.106]
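A small illustration of that statement, under an assumed synthetic data set in which only two of ten candidate variables carry information: the parsimonious two-variable MLR model tends to predict an unseen test set at least as well as the full ten-variable model fitted to the same training data.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((60, 10))                       # 10 candidate variables
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.3 * rng.standard_normal(60)  # only 2 matter

X_train, y_train = X[:30], y[:30]                       # calibration set
X_test,  y_test  = X[30:], y[30:]                       # previously unseen set

def test_mse(cols):
    """Fit MLR on the training data with the given columns and return the
    mean squared prediction error on the unseen test data."""
    b, *_ = np.linalg.lstsq(X_train[:, cols], y_train, rcond=None)
    return float(np.mean((y_test - X_test[:, cols] @ b) ** 2))

print("2-variable model :", round(test_mse([0, 1]), 3))
print("10-variable model:", round(test_mse(list(range(10))), 3))
```

Because the eight extra coefficients only fit noise in the calibration data, the larger model typically shows the higher prediction error on the new data, which is what the parsimony principle anticipates.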

The choice of the right model to use to describe experimental results is one of the trickiest, and most interesting, tasks in scientific work, and this is a subject that can only be touched on here. As discussed above, we are guided by the Principle of Parsimony, that in science one should seek the simplest explanation for phenomena. In the present context, that means that we should define models with as few parameters as possible, consistent with obtaining a satisfactory description of the data. This is a sensible approach, because if a simple model fits the data adequately, then so necessarily must more complicated versions of that model. It follows that experimental observations can only serve to rule out models, often, but not always, because they are oversimplified; the data can never prove that a model is correct. The question naturally arises at this stage of how one can establish whether or not a model is successful in accounting for the data. There are several criteria for assessing the quality of a model. [Pg.324]

Simplest Base Model Is Backbone. The simplest base model that characterizes the underlying patterns in the data should form the backbone for developing a population model. The principle of simplicity stipulates that models with the minimum number of parameters should be used. This is called the parsimony principle. The model with the smallest number of parameters that describe the data well is the most parsimonious model. [Pg.229]

The next two properties, appropriate level of detail and as simple as possible, are two sides of the same coin, because model detail increases at the expense of simplicity. Modelers refer to this aspect of model development as Occam's razor. Formulated by William of Occam in the late Middle Ages in response to increasingly complex theories being developed without an increase in predictability, Occam's razor is considered today to be one of the fundamental philosophies of modeling, the so-called principle of parsimony (Domingos, 1999). As ori-... [Pg.7]

Principle of parsimony (Occam's Razor; William of Ockham, 1285-1349/1350, English philosopher and logician). All things being (approximately) equal, one should accept the simplest model. [Pg.545]

Thus, as expected, the design, operation, and performance of the EKR system are not easy. Mathematical models are necessary in order to gain a better understanding of the processes that occur in the EKR and to allow predictions for the field-scale remediation. Generally, it is a good policy to keep the mathematical model as simple as possible while adequately describing the behavior of the main parameters of the system (principle of parsimony). Thus, models with relatively simple transport equations and few equilibrium equations are able to predict the evolution of parameters such as the rate of recovery of the toxic ion, the maximum recovery, the rate of acid addition, and the energy requirements. The equation of mass conservation for a pore water solute species (e.g. an ion) in an EKR system can be expressed as follows ... [Pg.540]

Recent research has shown that judicious variable selection can improve statistical predictions of models (Baroni et al., 1992; Brereton, 1995; Brereton and Elbergali, 1994; Broadhurst et al., 1997; Brown, 1993; Cruciani and Watson, 1994; Defalguerolles and Jmel, 1993; Hazen, Arnold and Small, 1994; Heikka, Minkkinen and Taavitsainen, 1994; Kubinyi, 1994a, b, 1996; Lindgren et al., 1995; Norinder, 1996; Shaw et al., 1996, 1997; Sreerama and Woody, 1994); statistical theory, in particular the parsimony principle... [Pg.317]

The principle of parsimony (de Noord, 1994; Flury and Riedwyl, 1988; Seasholtz and Kowalski, 1993) states that if a simple model (that is, one with relatively few parameters or variables) fits the data, then it should be preferred to a model that involves redundant parameters. A parsimonious model is likely to be better at prediction of new data and to be more robust against the effects of noise (de Noord, 1994). Despite this, the use of variable selection is still rare in chromatography and spectroscopy (Brereton and Elbergali, 1994). Note that the terms variable selection and variable reduction are used by different researchers to mean essentially the same thing. [Pg.359]

The most important task for a theoretician is the choice of a model. It may seem a paradox to emphasize that taking the largest model may not be recommended, even after a presentation of the codes with the implicit desire to be able to calculate the largest possible systems. The main contribution of a theoretician is to help analysis, which is by nature a simplification, rather than to produce numbers. In this regard, a model does not have to be complete and sophisticated; it should be constructed so as to explore the strictly necessary causes of a phenomenon. Ockham's razor is a principle of parsimony that recommends selecting the smallest number of assumptions that provides the correct answer. Nevertheless, if one wants to reach the limit of the state of the art, the improvement of the model should prevail over that of the technique. Physics should not depend on the technique, provided that mistakes are avoided. In the following, we want to outline some constraints on the choice of models for oxides. [Pg.196]

Alternatively, criteria can be estimated for each model based on the principle of parsimony, that is, all else being equal, select the simplest model. The Akaike Information Criterion (AIC) is one of the most widely used information criteria; it combines the model error sum of squares and the number of parameters in the model. [Pg.272]
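A minimal sketch of the least-squares form of AIC for Gaussian errors (the residual sums of squares, sample size and parameter counts below are hypothetical numbers chosen only to show the comparison):

```python
import numpy as np

def aic(sse, n_obs, n_params):
    """Akaike Information Criterion for a least-squares fit with Gaussian errors:
    AIC = n * ln(SSE / n) + 2k, with constant terms dropped.  Lower is better."""
    return n_obs * np.log(sse / n_obs) + 2 * n_params

# Comparing a 3-parameter and a 5-parameter fit to the same 40 observations
print(round(aic(sse=1.8, n_obs=40, n_params=3), 1))
print(round(aic(sse=1.7, n_obs=40, n_params=5), 1))
```

With these illustrative numbers the three-parameter model has the lower AIC despite its slightly larger error sum of squares, so it would be the preferred, more parsimonious choice.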

To examine the quality of model predictions against observed data, in addition to visual inspection, various statistical tests on the residuals are available to check for the presence of systematic misfitting, nonrandomness of the errors, and accordance with the assumed experimental noise. Model order estimation, that is, the number of compartments in the model, is also relevant here, and for linear compartmental models, criteria such as the F-test, and those based on the parsimony principle such as the Akaike and Schwarz criteria, can be used if the measurement errors are Gaussian. [Pg.173]

If the value of F exceeds the tabulated value of Fisher's F-distribution for (p2 - p1) and (N - p2) degrees of freedom, where p1 and p2 are the numbers of parameters of the simpler and the more complex model and N is the number of observations, at a confidence level of (usually) 95% (α = 0.05), then the complex model fits the data significantly better than the simpler model. If not, the parsimony principle dictates that the simpler model should be accepted as the best model. [Pg.349]
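A sketch of that extra-sum-of-squares test (the residual sums of squares and parameter counts are hypothetical; p1, p2 and N follow the notation used above):

```python
from scipy.stats import f as f_dist

def extra_ss_f_test(sse_simple, p1, sse_complex, p2, n_obs, alpha=0.05):
    """Extra-sum-of-squares F-test for two nested least-squares models.
    Returns the F statistic, the tabulated critical value, and whether the
    extra parameters of the complex model are justified at the given level."""
    df1, df2 = p2 - p1, n_obs - p2
    F = ((sse_simple - sse_complex) / df1) / (sse_complex / df2)
    F_crit = f_dist.ppf(1.0 - alpha, df1, df2)
    return F, F_crit, F > F_crit

# Hypothetical residual sums of squares for nested 3- and 5-parameter models
print(extra_ss_f_test(sse_simple=2.4, p1=3, sse_complex=2.1, p2=5, n_obs=40))
```

With these numbers F falls below the critical value, so the additional parameters are not justified and, following the parsimony principle, the simpler model is retained.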

These principles are phrased in the language of the ionic model, but they provide a simpler and more explicit description of stable structures than that given by the ionic model's energy minimization principle. Among the important ideas captured by Pauling's rules are those of local charge neutrality, the definition of electrostatic bond strength, and the rule of parsimony which is closely... [Pg.8]

If a component model of the raw data requires, say, R + 1 components to describe the data well, whereas a model of the centered data requires only R components, then centering is sensible because the model of the centered data only has R(I + J) + J parameters. The J parameters pertain to the calculated averages, assuming that centering is performed across the first mode. The alternative of fitting the (R + 1)-component model to the raw data would lead to a model with (R + 1)(I + J) parameters and thus would violate the parsimony principle [Judge et al. 1985, Seasholtz & Kowalski 1993, Weinberg 1964]. [Pg.229]
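A quick count with hypothetical dimensions makes the comparison concrete (the values of I, J and R are assumptions for illustration):

```python
I, J, R = 20, 10, 2             # hypothetical: samples, variables, components

centered = R * (I + J) + J      # R-component model of centered data + J offsets
raw      = (R + 1) * (I + J)    # (R + 1)-component model of the raw data

print(centered, raw)            # 70 vs 90: the centered model is more parsimonious
```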

