
Generalized Statistical Sampling

As one possible way to alter the sampling distribution in a manner conducive to enhanced sampling, we present a strategy based on probability distributions that arise in a generalization of statistical mechanics proposed by Tsallis [31]. [Pg.283]

The standard Metropolis MC corresponds to the q = 1 limit, in which case the probability of accepting a new configuration of the system reduces to the usual Metropolis acceptance criterion. [Pg.284]
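
As a hedged reminder (not necessarily the notation of the original equation), the standard Metropolis rule recovered at q = 1 is

$$p_{\mathrm{acc}}(x \to x') = \min\!\left[1,\ e^{-\beta\,[V(x') - V(x)]}\right],$$

and, for general q, the Boltzmann factor is replaced by a ratio of Tsallis weights (exponent conventions for the weight vary in the literature),

$$p_{\mathrm{acc}}(x \to x') = \min\!\left[1,\ \frac{P_q(x')}{P_q(x)}\right], \qquad P_q(x) \propto \bigl[1 - (1-q)\,\beta V(x)\bigr]^{\frac{1}{1-q}}.$$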

Also note that the definition of the effective potential V in (8.14) allows the Tsallis distributions to be generated by a constant-temperature molecular dynamics method instead of MC. Given this effective potential, a constant-temperature molecular dynamics algorithm can be defined such that the trajectory samples the distribution Pq(x). The equation of motion then takes on a simple and suggestive form. [Pg.284]
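
As an illustration of the idea rather than the book's algorithm, the sketch below runs overdamped Langevin dynamics on a Tsallis effective potential for a toy one-dimensional double well. The form assumed here for the effective potential, V_eff(x) = q/[β(q−1)] ln[1 + β(q−1)V(x)], is one common convention and may differ from the book's (8.14); the potential, parameters, and step count are purely illustrative.

```python
import numpy as np

# Hedged sketch (not the book's algorithm): overdamped Langevin dynamics on a
# Tsallis effective potential.  Assumed convention (conventions differ):
#     V_eff(x) = q / (beta*(q-1)) * ln[1 + beta*(q-1)*V(x)],
# whose gradient is the ordinary force scaled by q / (1 + beta*(q-1)*V(x)).

beta, q = 1.0, 1.5            # inverse temperature and Tsallis parameter (q > 1)
dt, gamma = 1e-3, 1.0         # time step and friction coefficient (illustrative)
rng = np.random.default_rng(0)

def V(x):                     # toy double-well potential, shifted so V(x) >= 0
    return (x**2 - 1.0)**2

def dV(x):                    # its derivative
    return 4.0 * x * (x**2 - 1.0)

def force_eff(x):
    # -dV_eff/dx: the physical force scaled by the barrier-flattening factor
    return -q * dV(x) / (1.0 + beta * (q - 1.0) * V(x))

x, visits_right = -1.0, 0
n_steps = 200_000
for _ in range(n_steps):
    noise = np.sqrt(2.0 * dt / (beta * gamma)) * rng.standard_normal()
    x += dt * force_eff(x) / gamma + noise
    visits_right += x > 0.0

print("fraction of time spent in the right-hand well:", visits_right / n_steps)
```

Because the effective force is just the physical force scaled by q/[1 + β(q−1)V(x)], barriers are progressively flattened for q > 1, which is what accelerates transitions between the wells.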

Enhanced sampling in conformational space is not only relevant to sampling classical degrees of freedom. An additional reason to illustrate this particular method is that the delocalization feature of the underlying distribution in Tsallis statistics is useful for accelerating the convergence of calculations in quantum thermodynamics [34]. We focus on a related method that enhances sampling for quantum free energies in Sect. 8.4.2. [Pg.285]

In the thermodynamic limit, the Tsallis updating scheme has the form [Pg.286]


Using this expression, the standard q = 1 equilibrium average properties may be calculated over a trajectory which samples the generalized statistical distribution for q ≠ 1, with the advantage of enhanced sampling for q > 1. [Pg.202]
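
A minimal sketch of such a reweighting step is given below, assuming the trajectory was generated with the Tsallis weight P_q(x) ∝ [1 − (1−q)βV(x)]^{1/(1−q)}; the function name and the weight convention are illustrative and should be adapted to the sampling scheme actually used.

```python
import numpy as np

# Hedged sketch: reweight a Tsallis-sampled (q != 1) trajectory back to
# standard (q = 1) Boltzmann averages.  Assumes the sampled weight was
# P_q(x) propto [1 - (1-q)*beta*V(x)]**(1/(1-q)); adapt the exponent to
# whichever convention was used to generate the trajectory.

def reweight_to_boltzmann(A_vals, V_vals, beta, q):
    """Estimate <A>_(q=1) from samples drawn from the Tsallis distribution."""
    A_vals = np.asarray(A_vals, dtype=float)
    V_vals = np.asarray(V_vals, dtype=float)
    # log of w(x) = exp(-beta*V(x)) / P_q(x), up to an irrelevant constant
    log_w = -beta * V_vals - np.log(1.0 - (1.0 - q) * beta * V_vals) / (1.0 - q)
    log_w -= log_w.max()            # guard against overflow before exponentiating
    w = np.exp(log_w)
    return np.sum(w * A_vals) / np.sum(w)
```

For q > 1 the Tsallis distribution is broader than the Boltzmann distribution, so the configurations that dominate the q = 1 averages are still visited and the weights remain well behaved.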

In effect, the standard deviation quantifies the typical magnitude of the deviations, i.e., a special type of average of the distances of the points from their center. In statistical theory, it turns out that the corresponding variance quantities s² have remarkable properties which make possible broad generalizations for sample statistics, and therefore also for their counterparts, the standard deviations. [Pg.488]
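
For reference, the sample standard deviation being described is the square root of the sample variance,

$$ s^2 \;=\; \frac{1}{n-1}\sum_{i=1}^{n}\bigl(x_i - \bar{x}\bigr)^2, \qquad s = \sqrt{s^2}; $$

one example of the properties alluded to is that variances of independent contributions add, Var(X + Y) = Var(X) + Var(Y), which is what lets general results derived for s² carry over to the standard deviation s.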

Machin et al. (1997) provide extensive tables for sample size calculations and include the formulas and many worked examples in their book. In addition, there are several software packages specifically designed to perform power and sample size calculations, namely nQuery (www.statsol.ie) and PASS (www.ncss.com). The general statistics package SPLUS (www.insightful.com) also contains some of the simpler calculations. [Pg.133]
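
As a hedged illustration of the kind of calculation these packages perform (the textbook normal-approximation formula, not a reproduction of Machin et al.'s tables), the per-group sample size for a two-sided, two-sample comparison of means can be sketched as follows.

```python
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided, two-sample
    comparison of means: n = 2*sigma^2*(z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
    z_alpha = norm.ppf(1.0 - alpha / 2.0)   # critical value for the two-sided test
    z_beta = norm.ppf(power)                # quantile corresponding to the desired power
    return math.ceil(2.0 * (sigma / delta) ** 2 * (z_alpha + z_beta) ** 2)

# Example: detect a half-standard-deviation difference with 80% power at alpha = 0.05
print(n_per_group(delta=0.5, sigma=1.0))    # about 63 subjects per group
```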

A detailed overview of references on sampling by HANNAPPEL [1994] covers not only general, statistical, and detailed aspects, but also the norms of the International Organization for Standardization in Geneva. [Pg.97]

General statistical methods such as sample size estimation, determination of practical significance, and one-sided testing can be applied to the paired t-test in the same manner that we have already seen for the two-sample t-test. [Pg.144]
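
A short hedged illustration of that point: the paired t-test is equivalent to a one-sample t-test on the within-pair differences, so one-sided testing and the other machinery carry over directly. The measurements below are made up, and the one-sided `alternative` keyword assumes SciPy ≥ 1.6.

```python
import numpy as np
from scipy import stats

# Illustrative paired measurements (invented data)
before = np.array([5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.3, 5.8])
after  = np.array([4.6, 4.9, 5.4, 5.0, 5.6, 4.5, 5.1, 5.2])

# Two-sided paired t-test
t_two_sided, p_two_sided = stats.ttest_rel(before, after)

# One-sided alternative "before > after" (requires SciPy >= 1.6)
t_one_sided, p_one_sided = stats.ttest_rel(before, after, alternative="greater")

# Equivalent formulation: one-sample t-test on the paired differences
d = before - after
t_diff, p_diff = stats.ttest_1samp(d, popmean=0.0)

print(p_two_sided, p_one_sided, p_diff)   # p_diff equals p_two_sided
```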

The objective of the study is to determine the nature and extent of pseudomorphs on Spl by mapping with photomicrography and statistical sampling techniques. The population to be studied consisted of all pseudomorphs after fabric located on the spearpoint. A survey previously conducted with microscopy identified the general locations of the mineralized fabrics. A research hypothesis, derived from the survey of evidence, governed the study and is as follows: pseudomorphs after fabric located on Spl are fragments of an unbalanced plain-weave type of fabric. [Pg.455]

For statistical samples of small volume (i.e. few data points), an increase in the order of the polynomial regression can produce a serious increase in the residual variance. We can reduce the number of coefficients in the model, but then we must introduce a transcendental regression relationship between the variables of the process. From the general theory of statistical process modelling (relations (5.1)-(5.9)) we can claim that the use of these types of relationships between dependent and independent process variables is possible. However, when using such relationships between the process variables, it is important to obtain an excellent ensemble of statistical data (i.e. one with small residual and relative variances). [Pg.362]
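
A small hedged sketch of why raising the polynomial order can hurt with few data points: the residual variance that matters is the degrees-of-freedom-corrected one, s² = SS_res/(n − p − 1) for a polynomial with p + 1 coefficients, and the loss of degrees of freedom can offset or outweigh the drop in SS_res. The data below are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic small sample: a truly linear relationship plus noise
rng = np.random.default_rng(1)
n = 8
x = np.linspace(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.2, size=n)

for degree in (1, 2, 3, 4):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    dof = n - (degree + 1)                      # degrees of freedom left over
    s2 = np.sum(residuals**2) / dof             # dof-corrected residual variance
    print(f"degree {degree}: residual variance s^2 = {s2:.4f} (dof = {dof})")
```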

Population (1) All of the people living at a place or in a region. An archaeological population generally refers to the people related through membership in the same group. (2) All of the items or units of interest in statistical sampling. [Pg.271]

When biomacromolecular embedding is considered, such as protein matrices and DNA structures, the discrete formulation is instead to be preferred: in those cases a detailed, atomistic description of the macromolecular environment is necessary in order to obtain accurate descriptions of the molecular process of interest. Moreover, for these systems, accurate force fields are generally available. Within this framework, the QM/MM approach is commonly used in combination with MD simulations, both to achieve proper statistical sampling and to account for the effects of fluctuations. Commonly the MD simulations are performed at a fully classical level (especially if the systems are large and the time windows to be explored are long). In the 2010s, however, QM/MM-MD simulations have also become feasible for small-to-medium QM systems over time windows of the order of tens to hundreds of ps. [Pg.229]

Discussions were held with several safety directors to determine how they would respond to estimates that 50% of the number of workers' compensation claims and 60% of total claims costs are ergonomics-related. There was general agreement that those estimates were sound, but two cautions were expressed: the estimates are applicable only if the statistical sample is large enough, and variations by industry could be significant. [Pg.340]

The variation between individuals in such DNA profiles is so great that we are unlikely, in any reasonable sample size, to see the same pattern duplicated in unrelated individuals. Thus we cannot arrive empirically at an estimate of the low probability of two DNA profiles matching by chance. We have to use statistical models to estimate this probability, based on reasonable genetic assumptions and measurements that can be derived from the available data. The quantities normally used to measure DNA fingerprint information are x, the bandsharing, which is the proportion of bands shared by unrelated individuals in the population, and n, which is the mean number of scorable bands detected in the profiles. An inaccurate but generally statistically conservative assumption that is often made is that x is constant for all... [Pg.159]
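
Under the constant-bandsharing assumption just described, the usual back-of-the-envelope estimate of the chance that an unrelated individual matches every band of a given profile is

$$ P(\text{match}) \;\approx\; x^{\,n}, $$

so with purely illustrative values x = 0.25 and n = 20 one would quote roughly $(0.25)^{20} \approx 9 \times 10^{-13}$. This estimate treats the n bands as shared independently, each with the same probability x, which is the kind of simplifying assumption the excerpt describes as inaccurate but generally statistically conservative.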

