
Sampling statistical

Using this concept, Burdett developed a method in 1955 to obtain the concentrations of mono-, di- and polynuclear aromatics in gas oils from the absorbances measured at 197, 220 and 260 nm, with the condition that the sulfur content be less than 1%. Knowledge of the average molecular weight enables the calculation of weight per cent from mole per cent. As with all methods based on statistical sampling from a population, this method is applicable only in the region used in the study; extrapolation is not advised and usually leads to erroneous results. [Pg.56]
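
Burdett's scheme amounts to solving a small linear system built from Beer's law at the three wavelengths. The sketch below shows the idea; the absorptivity matrix, absorbances, and average molecular weights are invented for illustration and are not taken from the source:

```python
import numpy as np

# Hypothetical molar absorptivity matrix E: rows are the 197, 220 and
# 260 nm wavelengths, columns the mono-, di- and polynuclear aromatics.
# Real coefficients would come from calibration, as in Burdett's work.
E = np.array([[120.0, 35.0, 10.0],
              [ 20.0, 95.0, 30.0],
              [  5.0, 15.0, 80.0]])
A = np.array([0.85, 0.60, 0.40])       # measured absorbances (invented)

c = np.linalg.solve(E, A)              # mole-based concentrations

# Mole per cent -> weight per cent using assumed average molecular weights.
MW = np.array([120.0, 170.0, 220.0])
w = c * MW
print("weight per cent:", 100 * w / w.sum())
```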

The small statistical sample leaves strong fluctuations on the timescale of the nuclear vibrations, which is a behavior typical of any detailed microscopic dynamics used as data for a statistical treatment to obtain macroscopic quantities. [Pg.247]

For fluids, this is computed by a statistical sampling technique, such as Monte Carlo or molecular dynamics calculations. There are a number of concerns that must be addressed in setting up these calculations, such as... [Pg.112]
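
As a concrete illustration of such a statistical sampling technique, the sketch below runs a bare-bones Metropolis Monte Carlo loop to estimate an ensemble average; the harmonic potential, temperature, and step size are illustrative choices, not taken from the text:

```python
import random, math

def metropolis_average_energy(beta=1.0, steps=100_000, step_size=0.5):
    """Estimate <U> for U(x) = x**2 by Metropolis sampling."""
    x = 0.0
    u = x * x
    acc = 0.0
    for _ in range(steps):
        x_new = x + random.uniform(-step_size, step_size)
        u_new = x_new * x_new
        # Metropolis acceptance criterion
        if u_new <= u or random.random() < math.exp(-beta * (u_new - u)):
            x, u = x_new, u_new
        acc += u
    return acc / steps     # exact answer is 1/(2*beta) by equipartition

print(metropolis_average_energy())
```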

The number of samples taken can be important. One sample often suffices where the material in question is known to be homogeneous for the parameter(s) to be tested, as for pure gases or bulk solvents. If this is not the case, then statistical sampling should be considered. If the material stratifies, samples should be taken from various points within it. [Pg.367]
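
A minimal sketch of sampling such a stratified material, with the strata and candidate sampling points invented for illustration:

```python
import random

# Hypothetical strata for a stratified material, e.g. layers in a tank;
# each stratum lists its candidate sampling points.
strata = {
    "top":    list(range(0, 100)),
    "middle": list(range(100, 200)),
    "bottom": list(range(200, 300)),
}

# Draw a fixed number of points at random from each stratum so every
# layer is represented, rather than pooling all points together.
n_per_stratum = 3
plan = {name: random.sample(points, n_per_stratum)
        for name, points in strata.items()}
print(plan)
```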

The quantity of sample required comprises two parts: the volume and the statistical sample size. The sample volume is selected to permit completion of all required analytical procedures. The sample size is the necessary number of samples taken from a stream to characterize the lot. Sound statistical practices are not always feasible, either physically or economically, in industry because of cost or accessibility. In most sampling procedures, samples are taken at different levels and locations to form a composite sample. If some prior estimate of the population mean, μ, and population standard deviation, σ, are known or may be estimated, then the difference between that mean and the mean, x̄, of a sample of n items is given by the following ... [Pg.298]
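
The elided expression is presumably the familiar standard-error result, |μ − x̄| ≤ zσ/√n at a chosen confidence level. A minimal sketch under that assumption, with illustrative numbers:

```python
import math

def mean_difference_bound(sigma, n, z=1.96):
    """Half-width of the interval for (mu - xbar): z * sigma / sqrt(n).

    Assumes the standard-error result for the sampling distribution of
    the mean; z = 1.96 corresponds to ~95% confidence.
    """
    return z * sigma / math.sqrt(n)

def samples_needed(sigma, max_error, z=1.96):
    """Smallest n such that z*sigma/sqrt(n) <= max_error."""
    return math.ceil((z * sigma / max_error) ** 2)

print(mean_difference_bound(sigma=0.8, n=25))    # illustrative numbers
print(samples_needed(sigma=0.8, max_error=0.2))
```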

Sample reduction in successive stages (primary to secondary, secondary to tertiary, etc.) can be accomplished using automatic sampling equipment while observing the design principles of statistical sampling. Alternatively, sample quantity reduction may be carried out in a laboratory. [Pg.1761]

The nuclear equipment failure rate database has not changed markedly since the RSS, and the chemical process data contain information for non-chemical process equipment in a more benign environment. Uncertainty in the database results from the statistical sample, heterogeneity, incompleteness, and unrepresentative environment, operation, and maintenance. Some PSAs use extensive studies of plant-specific data to augment the generic database by Bayesian methods and others do not. No standard guidance is available for when to use which, or for the improvement in accuracy that is achieved thereby. Improvements in the database and in the treatment of data require substantial industrial support, but that is expensive. [Pg.379]
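
The Bayesian augmentation mentioned here is often done with a conjugate gamma prior on the failure rate, updated by plant-specific failure counts. A minimal sketch, with all numbers invented for illustration:

```python
# Conjugate gamma-Poisson update of a failure rate (per hour).
# Prior (from a generic database): lambda ~ Gamma(alpha0, beta0),
# with mean alpha0/beta0. The numbers below are purely illustrative.
alpha0, beta0 = 2.0, 4.0e5        # prior mean 5e-6 failures per hour

# Plant-specific evidence: k failures in T operating hours.
k, T = 1, 8.0e4

alpha_post = alpha0 + k           # posterior shape
beta_post = beta0 + T             # posterior rate
print("posterior mean failure rate:", alpha_post / beta_post)
```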

Hungerford, J. M., and Christian, G. D., Statistical Sampling Errors as Intrinsic Limits on Detection in Dilute Solutions, Anal. Chem. 58, 1986, 2567–2568. [Pg.404]

To account for inhomogeneity in bubble sizes, d in Eq. (20-52) should be taken as the moment-ratio average Σnᵢdᵢ⁴/Σnᵢdᵢ³, and evaluated at the top of the vertical column if coalescence is significant in the rising foam. Note that this average d for overflow differs from that employed earlier for S. Also, see Bubble Sizes regarding the correction for planar statistical sampling bias and the presence of size segregation at a wall. [Pg.34]
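
Moment-ratio averages of a bubble-size distribution are straightforward to compute; a minimal sketch with an invented distribution (the counts and diameters are illustrative only, and which moment ratio applies to Eq. (20-52) depends on the handbook's definitions):

```python
# Moment-ratio averages of a bubble-size distribution, with n_i bubbles
# of diameter d_i. Counts and diameters below are invented.
d = [0.5, 1.0, 1.5, 2.0]          # bubble diameters, mm
n = [40, 90, 50, 20]              # number of bubbles in each class

def moment_mean(d, n, p):
    """Ratio of (p+1)th to pth moments: sum(n*d**(p+1)) / sum(n*d**p)."""
    num = sum(ni * di ** (p + 1) for di, ni in zip(d, n))
    den = sum(ni * di ** p for di, ni in zip(d, n))
    return num / den

print("Sauter mean d32:", moment_mean(d, n, 2))   # surface/volume mean
print("d43:", moment_mean(d, n, 3))               # volume-weighted mean
```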

How many products will be collected (statistical sample size); i.e., will the tail of the distribution need to be defined, or will the mean sufficiently address the issue of concern? [Pg.234]
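
The tail-versus-mean decision drives the sample size. One standard nonparametric argument, sketched below with hypothetical targets, gives the number of random samples needed so that, with a chosen confidence, at least one observation lies beyond a given population percentile:

```python
import math

def n_for_tail(percentile=0.95, confidence=0.95):
    """Smallest n with P(at least one of n samples beyond the percentile)
    >= confidence.

    P(all n samples below the p-quantile) = p**n, so the requirement is
    1 - p**n >= confidence.
    """
    return math.ceil(math.log(1 - confidence) / math.log(percentile))

print(n_for_tail())        # 59 samples for the 95th percentile at 95%
print(n_for_tail(0.99))    # 299 samples for the 99th percentile
```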

Hpp describes the primary system by a quantum-chemical method. The choice is dictated by the system size and the purpose of the calculation. Two approaches to using a finite computer budget are found. If an expensive ab-initio or density functional method is used, the number of configurations that can be afforded is limited. Hence, the computationally intensive Hamiltonians are mostly used in geometry optimization (molecular mechanics) problems (see, e.g., [66]). The second approach is to use cheaper and less accurate semi-empirical methods. This is the only choice when many conformations are to be evaluated, i.e., when molecular dynamics or Monte Carlo calculations with meaningful statistical sampling are to be performed. The drawback of semi-empirical methods is that they may be inaccurate to the extent that they produce qualitatively incorrect results, so that their applicability to a given problem has to be established first [67]. [Pg.55]

The calculation of the potential of mean force, ΔF(z), along the reaction coordinate z requires statistical sampling by Monte Carlo or molecular dynamics simulations that incorporate nuclear quantum effects, employing an adequate potential energy function. In our approach, we use combined QM/MM methods to describe the potential energy function and Feynman path integral approaches to model nuclear quantum effects. [Pg.82]
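
The sampling enters through the standard relation A(z) = −kBT ln P(z). A minimal sketch that builds a potential of mean force from a histogram of reaction-coordinate values; the synthetic samples stand in for real simulation output:

```python
import numpy as np

kB_T = 0.593  # kcal/mol at ~298 K

# Stand-in for reaction-coordinate values harvested from a simulation.
rng = np.random.default_rng(0)
z_samples = rng.normal(loc=0.0, scale=0.3, size=100_000)

# Histogram the samples to estimate P(z), then A(z) = -kT ln P(z).
hist, edges = np.histogram(z_samples, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
A = -kB_T * np.log(hist[mask])
A -= A.min()                      # set the minimum of the PMF to zero
for z, a in zip(centers[mask][::10], A[::10]):
    print(f"z = {z:+.2f}   A(z) = {a:.3f} kcal/mol")
```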

The DFT/MM approach has been applied to study equilibrium properties as well as chemical reactions. Several DFT/MM implementations have been developed, differing in the strategy for approximating the H_MicroEnv and H0_MicroEnv terms and in the way the statistical sample of conformations is generated. Below, these implementations will be briefly presented. [Pg.116]

Jarzynski [12]. Although this identity is an exact result, statistical sampling problems arise if the transformation moves the system too far from equilibrium. In this section we will explain the origin of these difficulties and show how transition path sampling can be used to overcome them. [Pg.265]
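
The identity in question is Jarzynski's relation, exp(−βΔF) = ⟨exp(−βW)⟩, averaged over repeated nonequilibrium transformations. A minimal sketch of the estimator, using synthetic Gaussian work distributions constructed so that the true ΔF is zero; the degradation far from equilibrium illustrates the sampling problem described above:

```python
import numpy as np

beta = 1.0
rng = np.random.default_rng(1)

# Synthetic Gaussian work values with variance 2*<W>/beta, so that the
# true free energy difference is exactly zero in both cases:
# dF = <W> - beta*var/2 = 0. Faster switching means more dissipation,
# and the exponential average is then dominated by rare low-W tails.
for mean_W, label in [(0.5, "near equilibrium"), (5.0, "far from equilibrium")]:
    W = rng.normal(loc=mean_W, scale=np.sqrt(2.0 * mean_W / beta), size=10_000)
    dF_est = -np.log(np.mean(np.exp(-beta * W))) / beta
    print(f"{label}: <W> = {W.mean():.2f}, estimated dF = {dF_est:.2f} (true 0)")
```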

As before, our simulation data are subject to statistical (sampling) uncertainties, which are particularly pronounced near the extremes of particle numbers and energies visited during a run. When data from multiple runs are combined, as shown in Fig. 10.3, the question arises of how to determine the optimal amount by which to shift the raw data in order to obtain a global free energy function. As reviewed in... [Pg.362]
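
One simple way to determine such shifts is least squares over the region where two runs overlap: the optimal constant shift is the mean difference of the two estimates in the overlapping bins. A minimal sketch with invented free-energy estimates (NaN marks bins a run never visited):

```python
import numpy as np

# Free-energy estimates from two runs on a common grid of bins.
# Values are invented for illustration.
f_run1 = np.array([0.0, 0.4, 1.1, 2.0, np.nan, np.nan])
f_run2 = np.array([np.nan, np.nan, 4.0, 4.9, 6.1, 7.4])

overlap = ~np.isnan(f_run1) & ~np.isnan(f_run2)
# Least-squares optimal constant shift = mean difference in the overlap.
shift = np.mean(f_run1[overlap] - f_run2[overlap])
f_global = np.where(np.isnan(f_run1), f_run2 + shift, f_run1)
print("shift =", shift)
print("global profile:", f_global)
```

A weighted version, with weights from each bin's sampling statistics, follows the same pattern.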

Heat Release Rate From Fuel Gas. The fuel gas used in these tests was natural gas supplied by the local gas company. This gas contains approximately 90 percent methane and small fractions of ethane, propane, butane, CO2, and nitrogen, as analyzed by Brenden and Chamberlain (6). Although the composition of the gas changes with time, the changes were small in our case. A statistical sample of the gross heat of combustion of the fuel gas over several months showed a coefficient of variation of 0.7 percent. Also, the gross heat of combustion of natural gas reported by the gas company on the day of the test did not vary significantly from test to test. Thus, we assumed that the net heat of combustion was constant. [Pg.420]
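
The coefficient of variation quoted here is simply the sample standard deviation divided by the sample mean. A minimal sketch with invented gross-heat values:

```python
import statistics

# Invented gross heats of combustion (MJ/m3) over several months.
gross_heat = [37.8, 38.1, 37.6, 38.0, 37.9, 38.2, 37.7]

mean = statistics.mean(gross_heat)
stdev = statistics.stdev(gross_heat)    # sample standard deviation
cv_percent = 100 * stdev / mean
print(f"coefficient of variation = {cv_percent:.2f}%")
```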

A set of replicate results should number at least twenty-five if it is to be a truly representative statistical sample. The analyst will rarely consider it economic to make this number of determinations, and will therefore need statistical methods to base the assessment on fewer data, or on data accumulated from the analysis of similar samples. Any analytical problem should be examined at the outset with respect to the precision, accuracy and reliability required of the results. Analysis of the results obtained will then conveniently resolve into two stages: an examination of the reliability of the results themselves and an assessment of their meaning. [Pg.629]
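
With fewer than the ideal twenty-five replicates, the standard tool is the Student t confidence interval. A minimal sketch with invented replicate values:

```python
import math, statistics

# Invented replicate determinations (e.g. percent analyte).
x = [10.12, 10.08, 10.15, 10.10, 10.09]

n = len(x)
mean = statistics.mean(x)
s = statistics.stdev(x)
t95 = 2.776                  # t for 95% confidence, n - 1 = 4 degrees of freedom
half_width = t95 * s / math.sqrt(n)
print(f"mean = {mean:.3f} +/- {half_width:.3f} (95% confidence)")
```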

Continuum models remove the difficulties associated with the statistical sampling of phase space, but they do so at the cost of losing molecular-level detail. In most continuum models, dynamical properties associated with the solvent and with solute-solvent interactions are replaced by equilibrium averages. Furthermore, the choice of where the primary subsystem ends and the dielectric continuum begins, i.e., the boundary and the shape of the cavity containing the primary subsystem, is ambiguous (since such a boundary is intrinsically nonphysical). Typically this boundary is placed on some sort of van der Waals envelope of either the solute or the solute plus a few key solvent molecules. [Pg.3]

Having been dried, homogenized or comminuted as necessary, the samples must now be digested in a suitable reagent to extract the elements in a suitable form for chemical analysis. In many organizations this is the point at which the samples pass from the hands of the person who took them to those of the analytical chemist. In the author's experience, however, it must be emphasized that to ensure best-quality results the whole procedure, from, for example, statistically sampling a sediment to the final chemical analysis, should be handled by the same person. [Pg.443]

Monte Carlo—A statistical technique commonly used to quantitatively characterize the uncertainty and variability in estimates of exposure or risk. The analysis uses statistical sampling techniques to obtain a probabilistic approximation to the solution of a mathematical equation or model. [Pg.234]
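
A minimal sketch of such an analysis, propagating invented input distributions through a hypothetical exposure equation (dose = C·IR/BW; neither the model nor the parameters come from the text):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical exposure model with uncertain inputs.
C = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)      # concentration, mg/L
IR = rng.normal(loc=2.0, scale=0.3, size=n).clip(min=0)     # intake rate, L/day
BW = rng.normal(loc=70.0, scale=10.0, size=n).clip(min=30)  # body weight, kg

dose = C * IR / BW                                          # mg/(kg*day)
print("median dose:", np.median(dose))
print("95th percentile:", np.percentile(dose, 95))
```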

Self-consistent approaches in molecular modeling have to strike a balance among appropriate representation of the primary polymer chemistry, adequate treatment of molecular interactions, sufficient system size, and sufficient statistical sampling of structural configurations or elementary transport processes. They should account for nanoscale confinement and random network morphology, and they should allow the calculation of thermodynamic properties and transport parameters. [Pg.421]

As discussed in Section 6.5.3, coarse-grained molecular modeling approaches offer the most viable route to the molecular modeling of hydrated ionomer membranes. The coarse-grained treatment implies simplified interactions, which can be systematically improved with advanced force-matching procedures, but it allows simulations of systems with sufficient size and sufficient statistical sampling. Structural correlations, thermodynamic properties, and transport parameters can be studied. [Pg.421]

