
Computer intensive statistical methods

Urban Hjorth, J.S. (1994). Computer Intensive Statistical Methods. Chapman and Hall, London. ... [Pg.451]

Sometimes, the distribution of a statistic must be derived under asymptotic or best-case conditions, which assume an infinite number of observations, as with the sampling distribution of a regression parameter, which is assumed to be normal. However, the asymptotic assumption of normality is not always valid. Further, sometimes the distribution of the statistic may not be known at all. For example, what is the sampling distribution for the ratio of the largest to smallest value in some distribution? Parametric theory is not entirely forthcoming with an answer. The bootstrap and the jackknife, which are two types of computer intensive analysis methods, can be used to assess the precision of a sample-derived statistic when its sampling distribution is unknown or when asymptotic theory may not be appropriate. [Pg.354]
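As a concrete illustration of the bootstrap idea in the excerpt above, here is a minimal sketch (Python with NumPy; all names and the lognormal sample are illustrative, not from the source) that estimates the standard error of the largest-to-smallest ratio, a statistic whose sampling distribution has no convenient closed form:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_se(data, statistic, n_boot=2000):
    """Estimate the standard error of `statistic` by the nonparametric
    bootstrap: resample the data with replacement and recompute the
    statistic on each resample."""
    n = len(data)
    boot_stats = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(data, size=n, replace=True)
        boot_stats[b] = statistic(resample)
    return boot_stats.std(ddof=1)

# Illustrative sample; the ratio of the largest to smallest value is the
# example of a statistic with an unknown sampling distribution.
sample = rng.lognormal(mean=0.0, sigma=0.5, size=50)
ratio = lambda x: x.max() / x.min()
print("observed ratio:", ratio(sample))
print("bootstrap SE  :", bootstrap_se(sample, ratio))
```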

As might be surmised, computer intensive statistical analysis methods have become more popular and useful with the advent of modern personal computers having... [Pg.354]

Hpp describes the primary system by a quantum-chemical method. The choice is dictated by the system size and the purpose of the calculation. Two approaches to using a finite computer budget are found: if an expensive ab initio or density functional method is used, the number of configurations that can be afforded is limited. Hence, the computationally intensive Hamiltonians are mostly used in geometry optimization (molecular mechanics) problems (see, e.g., [66]). The second approach is to use cheaper and less accurate semi-empirical methods. This is the only choice when many conformations are to be evaluated, i.e., when molecular dynamics or Monte Carlo calculations with meaningful statistical sampling are to be performed. The drawback of semi-empirical methods is that they may be inaccurate to the extent that they produce qualitatively incorrect results, so their applicability to a given problem has to be established first [67]. [Pg.55]

During the last two or three decades, chemists have become used to the application of computers to control their instruments, develop analytical methods, analyse data and, consequently, to apply different statistical methods to explore multivariate correlations between one or more output(s) (e.g. concentration of an analyte) and a set of input variables (e.g. atomic intensities, absorbances). [Pg.244]

Mendes, B. and Tyler, D.E., Constrained M estimates for regression, in Robust Statistics, Data Analysis and Computer Intensive Methods, Lecture Notes in Statistics No. 109, Rieder, H., Ed., Springer-Verlag, New York, 1996, pp. 299-320. [Pg.213]

Diaconis, P. and Efron, B. (1983). Computer Intensive Methods in Statistics. Sci. Am., 96-108. [Pg.558]

Uncertainties inherent to the risk assessment process can be quantitatively described using, for example, statistical distributions, fuzzy numbers, or intervals. Corresponding methods are available for propagating these kinds of uncertainties through the process of risk estimation, including Monte Carlo simulation, fuzzy arithmetic, and interval analysis. Computationally intensive methods (e.g., the bootstrap) that work directly from the data to characterize and propagate uncertainties can also be applied in ERA. Implementation of these methods for incorporating uncertainty can lead to risk estimates that are consistent with a probabilistic definition of risk. [Pg.2310]
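To make the propagation step concrete, the following is a hedged sketch of Monte Carlo uncertainty propagation through a risk estimate. The risk quotient model (exposure divided by effect concentration) and the lognormal input distributions are illustrative assumptions, not taken from the source:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical input uncertainties, described by statistical distributions
# (parameters chosen purely for illustration):
exposure = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=n)   # mg/kg/day
effect   = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)  # mg/kg/day

# Propagate the input uncertainties through the model: RQ = exposure / effect.
rq = exposure / effect

# Risk estimates consistent with a probabilistic definition of risk:
print("P(RQ > 1) =", (rq > 1.0).mean())
print("95% interval for RQ:", np.percentile(rq, [2.5, 97.5]))
```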

It has been advocated that the area under the ROC curve is a relative measure of a test's performance. A Wilcoxon statistic (or equivalently the Mann-Whitney U-test) statistically determines which ROC curve has more area under it. Less computationally intensive alternatives, which are no longer necessary, have been described. These methods are particularly helpful when the curves do not intersect. When the ROC curves of two laboratory tests for the same disease intersect, they may offer quite different performances even though the areas under their curves are identical. The performance depends on the region of the curve (i.e., high sensitivity versus high specificity) chosen. Details on how to compare statistically individual points on two curves have been developed elsewhere. ... [Pg.413]
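The equivalence between the area under the ROC curve and the Mann-Whitney statistic can be shown in a few lines. This sketch (Python/NumPy; the test values are invented for illustration) computes the AUC as the proportion of diseased/healthy pairs ranked correctly, counting ties as one half:

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney relation:
    AUC = P(score_pos > score_neg) + 0.5 * P(score_pos == score_neg)."""
    scores_pos = np.asarray(scores_pos, dtype=float)[:, None]
    scores_neg = np.asarray(scores_neg, dtype=float)[None, :]
    greater = (scores_pos > scores_neg).mean()  # fraction of correctly ordered pairs
    ties = (scores_pos == scores_neg).mean()    # tied pairs count one half
    return greater + 0.5 * ties

# Illustrative test results for diseased and healthy subjects:
diseased = [4.1, 5.2, 6.0, 7.3, 5.8]
healthy  = [3.0, 4.0, 3.5, 5.1, 2.9]
print("AUC =", auc_mann_whitney(diseased, healthy))
```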

Therefore, various approaches for computing adjusted p values have been applied. These include permutation-adjusted p values, such as MaxT [55], which uses a two-sample Welch t-statistic (unequal variances) with step-down resampling procedures. Typically these adjusted p values are computed with on the order of 10,000 permutations, which is computationally intensive. Although these methods are effective with large numbers of replicates, this approach is unfortunately not effective when datasets have small numbers of samples per group [56]. ... [Pg.143]
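To make the permutation idea concrete, here is a hedged sketch of a maxT adjustment built on a two-sample Welch t-statistic. Note that the published MaxT procedure [55] is step-down, whereas this simplified version is single-step; all data and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def welch_t(x, y):
    """Two-sample Welch t-statistics (unequal variances), one per feature (row)."""
    nx, ny = x.shape[1], y.shape[1]
    num = x.mean(axis=1) - y.mean(axis=1)
    den = np.sqrt(x.var(axis=1, ddof=1) / nx + y.var(axis=1, ddof=1) / ny)
    return num / den

def maxt_adjusted_p(x, y, n_perm=10_000):
    """Single-step maxT adjusted p-values: compare each observed |t| with the
    permutation distribution of the maximum |t| taken across all features."""
    t_obs = np.abs(welch_t(x, y))
    data = np.hstack([x, y])
    nx = x.shape[1]
    max_t = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(data.shape[1])       # shuffle sample labels
        xp, yp = data[:, perm[:nx]], data[:, perm[nx:]]
        max_t[b] = np.abs(welch_t(xp, yp)).max()
    # Adjusted p: fraction of permutations whose max |t| reaches the observed |t|.
    return (max_t[None, :] >= t_obs[:, None]).mean(axis=1)

# Toy expression matrix: 100 features, 5 samples per group.
x = rng.normal(size=(100, 5)); x[:5] += 2.0   # first 5 features truly differ
y = rng.normal(size=(100, 5))
p_adj = maxt_adjusted_p(x, y, n_perm=2000)
print("adjusted p for first 5 features:", np.round(p_adj[:5], 3))
```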

The jackknife has a number of advantages. First, the jackknife is a nonparametric approach to parameter inference that does not rely on asymptotic methods to be accurate. A major disadvantage is that a batch or script file will be needed to delete the ith observation, recompute the test statistic, compute the pseudovalues, and then calculate the jackknife statistics; of course, this disadvantage applies to all other computer intensive methods as well, so it might not be a disadvantage after all. Also, if θ is a nonsmooth parameter, where the sampling distribution may be discontinuous, e.g., the median, the jackknife estimate of the variance may be quite poor (Pigeot, 2001). For example, data were simulated from a normal distribution with mean 100 and... [Pg.354]
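A minimal sketch of the delete-one jackknife with pseudovalues (Python/NumPy; illustrative, not the source's script). Comparing a smooth statistic (the mean) with a nonsmooth one (the median) echoes the excerpt's caution:

```python
import numpy as np

def jackknife(data, statistic):
    """Delete-one jackknife: leave out each observation in turn, recompute
    the statistic, form pseudovalues, and return the jackknife estimate
    and its standard error."""
    data = np.asarray(data, dtype=float)
    n = len(data)
    theta_hat = statistic(data)
    theta_i = np.array([statistic(np.delete(data, i)) for i in range(n)])
    pseudo = n * theta_hat - (n - 1) * theta_i   # pseudovalues
    est = pseudo.mean()
    se = np.sqrt(pseudo.var(ddof=1) / n)
    return est, se

rng = np.random.default_rng(7)
sample = rng.normal(loc=100.0, scale=10.0, size=30)

# Smooth statistic (mean): the jackknife variance estimate behaves well.
print("mean  :", jackknife(sample, np.mean))
# Nonsmooth statistic (median): the jackknife variance can be quite poor,
# as the excerpt notes, because the sampling distribution is discontinuous.
print("median:", jackknife(sample, np.median))
```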

The most computationally intensive step in statistical or dynamical studies based on reaction path potentials is the determination of the MEP by numerical integration of Eq. (2) and the evaluation of potential energy derivatives along the path, so considerable attention should be directed toward doing this most efficiently. Kraka and Dunning [1] have presented a lucid description of many of the available methods for determining the MEP. Simple Euler integration of Eq. [Pg.58]
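Since Eq. (2) is not reproduced in this excerpt, the sketch below assumes the standard steepest-descent definition of the MEP, dx/ds = -∇V/|∇V|, and applies the simple Euler integration the excerpt mentions, on an invented two-dimensional double-well potential (all names and parameters are illustrative):

```python
import numpy as np

def grad_v(p):
    """Gradient of a model double-well potential V(x, y) = (x^2 - 1)^2 + y^2,
    which has a first-order saddle at the origin and minima at (±1, 0)."""
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def euler_mep(start, step=1e-3, tol=1e-2, max_steps=100_000):
    """Follow the steepest-descent path dx/ds = -grad V / |grad V| by simple
    Euler integration (assumed form of the excerpt's Eq. (2))."""
    path = [np.array(start, dtype=float)]
    for _ in range(max_steps):
        g = grad_v(path[-1])
        norm = np.linalg.norm(g)
        if norm < tol:          # gradient vanishing: essentially at a minimum
            break
        path.append(path[-1] - step * g / norm)
    return np.array(path)

# Displace slightly off the saddle point along the unstable (reaction) direction:
path = euler_mep(start=(0.01, 0.0))
print("path ends near minimum:", path[-1])
```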

During the last two or three decades atomic spectroscopists have become used to the application of computers to control their instruments, develop analytical methods, analyse data and, consequently, to apply different statistical methods to explore multivariate correlations between one or more output(s) (e.g. concentration of an analyte) and a set of input variables (e.g. atomic intensities, absorbances). On the other hand, the huge efforts made by atomic spectroscopists to resolve interferences and optimise the instrumental measuring devices to increase accuracy and precision have led to a point where many of the difficulties that have to be solved nowadays cannot be described by simple univariate linear regression methods (Chapter 1 gives an extensive review of some typical problems shown by several atomic techniques). Sometimes such problems cannot even be addressed by multivariate regression methods based on linear relationships, as is the case for the regression methods described in the previous two chapters. [Pg.367]

