Big Chemical Encyclopedia


Parsimony criterion

Numerical taxonomy in its cladistic phase became animated through the real progress in systematic theory over the past two decades ... the discoveries that phenetics is false, that parsimony [or compatibility, or likelihood] is essential, that exact solutions to parsimony problems [or compatibility problems, or likelihood problems] are possible (Farris and Platnick, 1989: 308). Animation flowed from the belief that (Farris, 1968: 9) perhaps the most impressive theoretical advantage of the parsimony criterion is that it is certain to give a correct tree, provided the data consist of a sufficient array of non-convergent characters, and from similar beliefs, with similar provisos, that compatibility or maximum likelihood might alternatively, even certainly, give a correct tree. [Pg.127]

A total of 5462 positions were aligned, of which 2730 were gapped positions excluded due to ambiguity. Of the remaining sites, 1648 were constant and 493 were phylogenetically informative under the parsimony criterion. The 28S data... [Pg.140]
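The site counts above follow standard definitions, which a short sketch can make concrete (a generic illustration, not the authors' alignment pipeline): a column is excluded if it contains a gap or ambiguity character, constant if it shows a single state, and parsimony-informative if at least two states each occur in at least two taxa. The toy alignment `aln` and the rule of excluding any column containing a gap are illustrative assumptions.

```python
from collections import Counter

def classify_sites(alignment, gap_chars="-?"):
    """Classify alignment columns as gapped, constant, parsimony-informative,
    or variable-but-uninformative. A column is informative if at least two
    character states each occur in at least two taxa."""
    counts = {"gapped": 0, "constant": 0, "informative": 0, "uninformative": 0}
    for col in zip(*alignment):                 # iterate over columns
        if any(c in gap_chars for c in col):
            counts["gapped"] += 1               # excluded, as in the text
            continue
        freqs = Counter(col)
        if len(freqs) == 1:
            counts["constant"] += 1
        elif sum(1 for n in freqs.values() if n >= 2) >= 2:
            counts["informative"] += 1
        else:
            counts["uninformative"] += 1        # e.g. a single autapomorphy
    return counts

# Hypothetical 4-taxon, 5-site alignment:
aln = ["ACGTA",
       "ACGTT",
       "AGGAT",
       "AGCAT"]
print(classify_sites(aln))
```

Real pipelines differ in how they treat partially gapped columns; here any gap excludes the whole column.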

The most commonly used techniques for estimating trees for sequences may be grouped into three categories: (1) distance methods, (2) maximum parsimony, and (3) maximum-likelihood-based methods. There are other methods, but they are not widely used. Further, each of these categories covers many variations and even distinct methods with different properties and assumptions. These methods have often been divided in different ways (different from the three categories here), such as cladistic versus phenetic, character-based versus non-character-based, method-based versus criterion-based, and others. These divisions may merely reflect the particular prejudices of the person making them and can be artificial. [Pg.121]

First of all, the decision must be made whether and where models are to be applied and what types of model (e.g., detailed, parsimonious) could be used. The most important selection criterion is the required accuracy of the results: if there is demand for very accurate and detailed model results, a more sophisticated model has to be applied, and relevant data have to be collected accordingly (Højberg et al., 2006). Important aspects should be uncertainty assessment and quality assurance. [Pg.188]

Maximum Parsimony (MP). Maximum parsimony is an optimization criterion that adheres to the principle that the best explanation of the data is the simplest, which in turn is the one requiring the fewest ad hoc assumptions. In practical terms, the MP tree is the shortest, that is, the one with the fewest changes, which by definition is also the one with the fewest parallel changes. There are several variants of MP that differ with regard to the permitted directionality of character state change (Swofford et al., 1996). [Pg.343]
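The tree length that MP minimizes can be computed character by character. The sketch below implements Fitch's (1971) algorithm for unordered states on a fixed rooted binary tree; the nested-tuple tree encoding and the leaf names are illustrative choices, not part of the original text.

```python
def fitch_length(tree, states):
    """Minimum number of unordered character-state changes required on a
    rooted binary tree (Fitch, 1971). `tree` is a nested tuple of leaf
    names; `states` maps each leaf name to its character state."""
    changes = 0

    def post(node):
        nonlocal changes
        if isinstance(node, str):                  # leaf: singleton state set
            return {states[node]}
        left, right = [post(child) for child in node]
        if left & right:                           # intersection: no change
            return left & right
        changes += 1                               # union: one change implied
        return left | right

    post(tree)
    return changes

# Tree ((A,B),(C,D)) with states A=G, B=G, C=T, D=G needs one change:
tree = (("A", "B"), ("C", "D"))
print(fitch_length(tree, {"A": "G", "B": "G", "C": "T", "D": "G"}))  # → 1
```

Summing this length over all characters and minimizing over tree topologies gives the MP tree; the variants mentioned in the text (ordered, Dollo, etc.) change only the per-character step-counting rule.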

Eqs. 63-68 reveal a typical dilemma in Hansch analysis: while eqs. 65-68 are significantly better than eq. 63 and are based on more reasonable assumptions than eq. 64, which one of them is the best equation? On the basis of the correlation coefficients r (the crutches of a QSAR beginner), eq. 67 is to be preferred; eq. 66 seems to be the best one if the standard deviation s, a much better criterion, is considered. The differences in the correlation coefficients r and in the standard deviations s of eqs. 65-68 are rather small. However, if one applies the principle of parsimony, eqs. 66 and 67 should be omitted because they include too many parameters for such a small data set. [Pg.61]
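The contrast between r and s can be shown numerically: for nested least-squares models, r can only increase as parameters are added, while s carries a degrees-of-freedom penalty and can get worse. The synthetic data below are an assumption for illustration, not the Hansch data behind eqs. 63-68.

```python
import numpy as np

def r_and_s(y, y_hat, k):
    """Correlation coefficient r and standard deviation s of a fit with
    k adjustable parameters (besides the intercept) on n observations."""
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r = np.sqrt(1 - ss_res / ss_tot)
    s = np.sqrt(ss_res / (n - k - 1))   # divisor shrinks as k grows
    return r, s

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)                        # small data set, as in the text
y = 2 * x + rng.normal(scale=0.1, size=x.size)   # truly linear relationship

for k in (1, 2, 5):                              # polynomial degree = parameter count
    y_hat = np.polyval(np.polyfit(x, y, k), x)
    r, s = r_and_s(y, y_hat, k)
    print(f"k={k}: r={r:.4f}  s={s:.4f}")
```

Because the models are nested, r is guaranteed non-decreasing in k, which is exactly why it is a poor basis for choosing among eqs. 65-68; s can deteriorate when added parameters buy too little residual reduction.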

There are several important philosophical differences between distance-based phylogenetic methods and character-based parsimony that should not be overlooked. First is the unappealing property of distance-based analyses that all information on evolutionary change is averaged into one number for each pair of taxa. Second, to paraphrase Swofford and Olsen (1990), the assumptions involved in distance methods (such as additivity and clock-like evolution) are rarely evident or discussed, and the justification of the algorithm itself often seems to be the objective of the study. By contrast, in the case of methods with a well-defined optimality criterion such as parsimony, the objective is usually related to a (more or less) concrete set of assumptions. [Pg.52]

Alternatively, criteria can be estimated for each model based on the principle of parsimony, that is, all else being equal, select the simplest model. The Akaike Information Criterion (AIC) is one of the most widely used information criteria; it combines the model error sum of squares and the number of parameters in the model. [Pg.272]
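For least-squares fits, a common form of the AIC (with additive constants dropped, so only differences between models are meaningful) is AIC = n ln(SSE/n) + 2k. A minimal sketch with hypothetical SSE values shows the parsimony penalty at work: the simpler model can win even though its error sum of squares is larger.

```python
import math

def aic(sse, n, k):
    """Least-squares form of the Akaike Information Criterion:
    n * ln(SSE/n) + 2k. Lower is better; constants are dropped."""
    return n * math.log(sse / n) + 2 * k

# Hypothetical fits of the same n = 20 observations:
n = 20
models = {"2 parameters": (4.0, 2), "5 parameters": (3.5, 5)}
for name, (sse, k) in models.items():
    print(f"{name}: AIC = {aic(sse, n, k):.2f}")
```

Here the 2-parameter model has the lower (better) AIC despite its larger SSE, because the 2k term penalizes the three extra parameters more than the modest error reduction is worth.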

This two-block predictive PLS regression has been found very satisfactory for multivariate calibration and many other types of practical multivariate data analysis. This evaluation is based on a composite quality criterion that includes parsimony, interpretability, and flexibility of the data model; lack of unwarranted assumptions; wide range of applicability; good predictive ability in the mean-square-error sense; computational speed; good outlier warnings; and an intuitively appealing estimation principle. See, for example, Reference 6, Reference 7, and References 15-17. [Pg.197]
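As an illustration of the PLS idea (not the implementation the cited authors evaluated), the sketch below codes a minimal NIPALS PLS1 for a univariate response: a few latent variables give a parsimonious model of many collinear predictors. The synthetic data and the component count are assumptions.

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal NIPALS PLS1 (single y variable): a teaching sketch, not a
    production implementation. Returns regression coefficients for
    centered X and y."""
    X, y = X - X.mean(0), y - y.mean()
    Ws, Ps, qs = [], [], []
    for _ in range(n_components):
        w = X.T @ y
        w /= np.linalg.norm(w)            # weight vector
        t = X @ w                         # score vector
        tt = t @ t
        p = X.T @ t / tt                  # X loading
        q = (y @ t) / tt                  # y loading
        X = X - np.outer(t, p)            # deflate X
        y = y - q * t                     # deflate y
        Ws.append(w); Ps.append(p); qs.append(q)
    W, P, q = np.array(Ws).T, np.array(Ps).T, np.array(qs)
    return W @ np.linalg.inv(P.T @ W) @ q # standard PLS1 coefficient formula

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 10))             # 10 predictors, only 2 relevant
y = X @ np.array([1.0, 0.5] + [0.0] * 8) + rng.normal(scale=0.05, size=40)

b = pls1(X, y, n_components=2)            # 2 latent variables suffice here
y_hat = (X - X.mean(0)) @ b + y.mean()
print("max abs error:", np.abs(y_hat - y).max())
```

The parsimony aspect mentioned in the text shows up in `n_components`: far fewer latent variables than original predictors, chosen (in practice) by cross-validated predictive ability.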

Appendix 6B Data matrix used for this cladistic analysis, which contained 7 taxa and 13 characters. 6 trees, each 20 steps long; consistency index (CI) = 0.85; homoplasy index (HI) = 0.15; CI excluding uninformative characters = 0.81; HI excluding uninformative characters = 0.19; retention index (RI) = 0.73; rescaled consistency index (RCI) = 0.62; unrooted tree(s) rooted using the outgroup method; optimality criterion = parsimony. All characters are unordered and equally weighted; character-state optimisation: ACCTRAN. [Pg.147]
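The ensemble indices quoted above are simple ratios of step counts: with M the minimum conceivable steps summed over characters, S the observed tree length, and G the maximum steps, the standard definitions are CI = M/S, HI = 1 - CI, RI = (G - S)/(G - M), and RCI = CI × RI. A sketch with hypothetical per-character step counts (not the Appendix 6B data):

```python
def tree_indices(min_steps, actual_steps, max_steps):
    """Ensemble consistency (CI), homoplasy (HI), retention (RI), and
    rescaled consistency (RCI) indices from per-character step counts."""
    M, S, G = sum(min_steps), sum(actual_steps), sum(max_steps)
    ci = M / S                     # CI = min steps / observed steps
    ri = (G - S) / (G - M)         # RI = (max - observed) / (max - min)
    return {"CI": ci, "HI": 1 - ci, "RI": ri, "RCI": ci * ri}

# Hypothetical step counts for 4 characters on one tree:
print(tree_indices(min_steps=[1, 1, 1, 1],
                   actual_steps=[1, 1, 2, 1],
                   max_steps=[2, 2, 3, 2]))
```

One extra step on the third character (homoplasy) pulls CI below 1; a CI of 0.85 as reported above indicates relatively little homoplasy in that matrix.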

The selection of variables can separate relevant information from unwanted variability and at the same time allows data compression, that is, more parsimonious models, simplification or improvement of model interpretation, and so on. Although many approaches can be used for feature selection, in this work a wavelet-based supervised feature selection/classification algorithm, WPTER [12], was applied. The best performing model was obtained using a Daubechies 10 wavelet, a maximum decomposition level equal to 10, the between-class/within-class variance ratio criterion for the thresholding operation, and a percentage of selected coefficients equal to 2%. Six wavelet coefficients were selected, belonging to the 4th, 5th, 6th, 8th, and 9th levels of decomposition. [Pg.401]
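The between-class/within-class variance-ratio criterion used for thresholding can be sketched generically. This re-implements one common form of the criterion on arbitrary features, not the WPTER algorithm itself; the synthetic two-class data and the 2% cut below are illustrative assumptions.

```python
import numpy as np

def variance_ratio(X, labels):
    """Between-class / within-class variance ratio per feature (column).
    Larger values mark features that separate the classes better."""
    grand = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(labels):
        Xc = X[labels == c]
        between += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / within

rng = np.random.default_rng(2)
labels = np.array([0] * 10 + [1] * 10)
X = rng.normal(size=(20, 50))            # 50 features, e.g. wavelet coefficients
X[labels == 1, 3] += 4.0                 # make feature 3 discriminate the classes

ratios = variance_ratio(X, labels)
n_keep = int(np.ceil(0.02 * X.shape[1])) # keep the top 2%, as in the text
print("selected features:", np.argsort(ratios)[::-1][:n_keep])
```

In WPTER the ratio would be computed on wavelet-packet coefficients after decomposition; here plain random features stand in for them.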

