
Statistical uncertainties

In this section we have examined the issue of time with respect to the processing and recording of signals, and also with regard to statistical uncertainty. These considerations are the basis for the optimization of more complex experiments where the time correlation between sets of events, or among several different events, is sought. [Pg.1422]

Coincidence experiments have been common in nuclear physics since the 1930s. The widely used coincidence circuit of Rossi [9] allowed experimenters to determine, within the resolution time of the electronics of the day, whether two events were coincident in time. The early circuits were capable of submicrosecond resolution, but lacked the flexibility of today's equipment. The most important distinction between modern coincidence methods and those of the earlier days is the availability of semiconductor memories that now allow one to record precisely the time relations between all particles detected in an experiment. We shall see the importance of this in the evaluation of the statistical uncertainty of the results. [Pg.1428]

Figure B1.10.8. Time spectrum from a double coincidence experiment. Through the use of a delay in the lines of one of the detectors, signals that occur at the same instant in both detectors are shifted to the middle of the time spectrum. Note the uniform background upon which the true coincidence signal is superimposed. In order to decrease the statistical uncertainty in the determination of the true coincidence rate, the background is sampled over a time Δt_B that is much larger than the width Δτ of the true coincidence signal.
This minimization can be unweighted as above, or it can be weighted using the statistical uncertainty with respect to the measurements or engineering judgment. [Pg.2573]
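As an illustration of the difference (a minimal sketch with invented data and variable names, not taken from the cited text), the following compares an unweighted straight-line fit with one weighted by the statistical uncertainty of each measurement:

```python
import numpy as np

# Hypothetical measurements y at points x, each with its own standard
# deviation sigma (the statistical uncertainty of that measurement).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 4.2, 5.8, 8.1])
sigma = np.array([0.1, 0.3, 0.2, 0.5, 0.2])

# Unweighted fit of y = a*x + b: every point counts equally.
a_u, b_u = np.polyfit(x, y, 1)

# Weighted fit: np.polyfit takes weights w ~ 1/sigma, so noisier points
# (larger sigma) pull the fit less.
a_w, b_w = np.polyfit(x, y, 1, w=1.0 / sigma)

print(f"unweighted: slope={a_u:.3f}, intercept={b_u:.3f}")
print(f"weighted:   slope={a_w:.3f}, intercept={b_w:.3f}")
```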

First, the parameter estimate may be representative of the mean operation for that time period, or it may be representative of an extreme, depending upon the set of measurements upon which it is based. This arises because of the normal fluctuations in unit measurements. Second, the statistical uncertainty, typically unknown, in the parameter estimate casts a confidence interval around the parameter estimate. Apparently large differences in mean parameter values for two different periods may be statistically insignificant. [Pg.2577]
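A quick way to check whether an apparently large difference between two periods is statistically significant is a two-sample test; the sketch below is illustrative, with invented values and sample sizes, and is not drawn from the cited study:

```python
import numpy as np
from scipy import stats

# Hypothetical parameter estimates from two operating periods.
period_1 = np.array([4.9, 5.3, 5.1, 4.7, 5.4, 5.0])
period_2 = np.array([5.6, 5.2, 5.8, 5.1, 5.5, 5.9])

# Welch's t-test: does the apparent difference in means exceed what the
# scatter (statistical uncertainty) within each period can explain?
t_stat, p_value = stats.ttest_ind(period_1, period_2, equal_var=False)
print(f"mean difference = {period_2.mean() - period_1.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value would mean the difference is statistically insignificant.
```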

There are a variety of ways to express absolute QRA results. Absolute frequency results are estimates of the statistical likelihood of an accident occurring. Table 3 contains examples of typical statements of absolute frequency estimates. These estimates for complex system failures are usually synthesized using basic equipment failure and operator error data. Depending upon the availability, specificity, and quality of failure data, the estimates may have considerable statistical uncertainty (e.g., factors of 10 or more because of uncertainties in the input data alone). When reporting single-point estimates or best estimates of the expected frequency of rare events (i.e., events not expected to occur within the operating life of a plant), analysts sometimes provide a measure of the sensitivity of the results arising from data uncertainties. [Pg.14]
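Such factor-of-10 data uncertainties are commonly represented by a lognormal distribution whose error factor is the ratio of the 95th percentile to the median; the following Monte Carlo sketch (an illustration with assumed values, not from the excerpted QRA texts) shows how wide the resulting frequency distribution is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical best-estimate failure frequency (per year) with an
# error factor EF = 10, i.e. the 95th percentile is 10x the median.
median = 1.0e-5
error_factor = 10.0
# For a lognormal distribution, EF = exp(1.645 * sigma_ln).
sigma_ln = np.log(error_factor) / 1.645

samples = rng.lognormal(mean=np.log(median), sigma=sigma_ln, size=100_000)
p5, p50, p95 = np.percentile(samples, [5, 50, 95])
print(f"5th/50th/95th percentiles: {p5:.2e} / {p50:.2e} / {p95:.2e}")
```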

The accuracy of absolute risk results depends on (1) whether all the significant contributors to risk have been analyzed, (2) the realism of the mathematical models used to predict failure characteristics and accident phenomena, and (3) the statistical uncertainty associated with the various input data. The achievable accuracy of absolute risk results is very dependent on the type of hazard being analyzed. In studies where the dominant risk contributors can be calibrated with ample historical data (e.g., the risk of an engine failure causing an airplane crash), the uncertainty can be reduced to a few percent. However, many authors of published studies and other expert practitioners have recognized that uncertainties can be greater than 1 to 2 orders of magnitude in studies whose major contributors are rare, catastrophic events. [Pg.47]

It is important to consider the uncertainties in the consequence evaluation. Many of the events and occurrences modeled in the consequence evaluation are random or highly variable processes which can only be described statistically; hence, the results have statistical uncertainties (Section 2.7). Second, the models may contain modeling inaccuracies. Finally, to make the evaluation manageable, it is necessary to group by categories, which may be as misleading as it was in the RSS. [Pg.330]

However, the data that are contributed to a generic failure rate database are rarely for identical equipment and may represent many different circumstances. Generic data must be chosen carefully, because aggregating generic and plant-specific data may not improve the statistical uncertainty associated with the final data point, owing to the change in tolerance. [Pg.12]

Comment: When using data for LOELs, LOAELs, NOELs, or NOAELs, it is important to be aware of their limitations. As discussed in the chapter, statistical uncertainty exists in the determination of these parameters due to the limited number of animals used in the studies to determine the values. However, any toxic effect may be used for the NOAEL and LOAEL, so long as it is the most sensitive toxic effect and considered likely to occur in humans. [Pg.343]

The average value of X calculated from (16.9) has a statistical uncertainty σ(X) which is inversely proportional to the square root of the number of sampling points M. [Pg.375]
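A minimal numerical check of this 1/√M scaling, using synthetic samples in place of the actual estimator from (16.9):

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard error of a Monte Carlo average for growing numbers of
# sampling points M; it should shrink like 1/sqrt(M).
for M in (100, 10_000, 1_000_000):
    x = rng.normal(loc=0.0, scale=1.0, size=M)  # stand-in for sampled X values
    std_err = x.std(ddof=1) / np.sqrt(M)
    print(f"M = {M:>9}: sigma(X_bar) ~ {std_err:.5f}")
```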

Schwartz, L. M., "Statistical Uncertainties of Analyses by Calibration of Counting Measurements", Anal. Chem. 50 (1978) 980-985. [Pg.408]

As explained in Section 1, three diffusivities were calculated for each system. These were the equilibrium transverse diffusivity and the two nonequilibrium (flow) diffusivities, parallel and normal to the direction of flow. As we can see from Table I, they all agree with each other within the limits of statistical uncertainty. We conclude, therefore, that the flow has no effect on the diffusivity even at such high shear rates as the ones employed in our simulation. At even higher shear rates a significant dependence of the diffusivity on the shear rate has been reported (18), but one should consider that our shear rate is already orders of magnitude higher than those encountered in realistic flow situations. [Pg.275]

The LADM is also shown in Figure 4. It agrees with the simulation profile almost within the limits of the statistical uncertainty. [Pg.279]

(iv) Shear stress and viscosity. As explained in Section 1, three independent estimates of the shear stress can be made for this particular type of flow. For both systems they all agree within the limits of statistical uncertainty, as shown in Table II. The shear stress in the micropore fluid is significantly lower than in the bulk fluid, which shows that strong density inhomogeneities can induce large changes in the shear stress. [Pg.279]

The denominator in (3.1) can be simplified because the statistical uncertainty of the baseline, ΔN(∞), is negligible in practice when the spectra are simulated with numerical line-fit routines. The stochastic emission of γ-rays by the source leads to a Poisson distribution of counts with width ΔN = √N, and since the effect is small, the denominator of (3.1) can be written as ... [Pg.542]
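As a hedged illustration of the Poisson width ΔN = √N (synthetic counts, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated counts in a channel with true mean N = 10,000: the scatter
# of Poisson counts should match sqrt(N) = 100.
N = 10_000
counts = rng.poisson(lam=N, size=50_000)
print(f"empirical width = {counts.std(ddof=1):.1f}, sqrt(N) = {np.sqrt(N):.1f}")
# Relative uncertainty 1/sqrt(N) = 1% here, so the baseline is well determined.
```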

Fig. 22. Comparison between experimental (crosses with error bars indicating 1σ statistical uncertainty) and theoretical integral cross-sections for (a) the H + H2O → H2 + OH and (b) the H + D2O → HD + OD reactions. The theoretical cross-sections are also shown enlarged by a factor of 10.
A realistic uncertainty interval has to be estimated by considering the statistical deviations as well as the non-statistical uncertainties appearing in all steps of the analytical process. All the significant deviations have to be combined by means of the law of error propagation; see Sect. 4.2. [Pg.242]
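For independent contributions, the law of error propagation reduces to combining standard uncertainties in quadrature; the sketch below uses invented values for the individual steps of the analytical process:

```python
import math

def combined_uncertainty(contributions):
    """Combine independent standard uncertainties in quadrature,
    as the law of error propagation prescribes for a simple sum."""
    return math.sqrt(sum(u * u for u in contributions))

# Hypothetical standard uncertainties from steps of an analytical process:
# sampling, sample preparation, calibration, and counting statistics.
u_total = combined_uncertainty([0.8, 0.5, 1.2, 0.3])
print(f"combined standard uncertainty = {u_total:.2f}")
```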

Intrinsic disorder might not be encoded by the sequence, but rather might be the result of the absence of suitable tertiary interactions. If this were the general cause of intrinsic disorder, any subset of ordered sequences and any subset of disordered sequences would likely be the same within the statistical uncertainty of the sampling. On the other hand, if intrinsic disorder were encoded by the amino acid sequence, any subset of disordered sequences would likely differ significantly from samples of ordered protein sequences. Thus, to test the hypothesis that disorder is encoded by the sequence, we collected examples of intrinsically ordered and intrinsically disordered proteins, then determined whether and how their sequences were distinguishable. [Pg.49]

The maximum search function is designed to locate intensity maxima within a limited area of x space. Such information is important in order to ensure that the specimen is correctly aligned. The user must supply an initial estimate of the peak location and the boundary of the region of interest. Points surrounding this estimate are sampled in a systematic pattern to form a new estimate of the peak position. Several iterations are performed until the statistical uncertainties in the peak location parameters, as determined by a linearized least-squares fit to the intensity data, are within bounds that are consistent with their estimated errors. [Pg.150]
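A heavily simplified sketch of such an iterative maximum search is given below; the intensity surface, sampling pattern, and convergence rule are assumptions for illustration (the original refines the estimate with a linearized least-squares fit rather than a grid comparison):

```python
import numpy as np

def intensity(x, y):
    """Stand-in for a measured intensity surface with a peak near (1.0, -0.5)."""
    return np.exp(-((x - 1.0) ** 2 + (y + 0.5) ** 2) / 0.1)

def find_peak(x0, y0, step=0.2, tol=1e-4, max_iter=50):
    """Sample points around the current estimate and move to the best one,
    shrinking the step until the estimate stops changing within tol."""
    x, y = x0, y0
    for _ in range(max_iter):
        # Systematic sampling pattern: a 3x3 grid around the current estimate.
        grid = [(x + i * step, y + j * step) for i in (-1, 0, 1) for j in (-1, 0, 1)]
        xb, yb = max(grid, key=lambda p: intensity(*p))
        if (xb, yb) == (x, y):
            step *= 0.5            # refine the search around the same point
            if step < tol:
                return x, y
        x, y = xb, yb
    return x, y

print(find_peak(0.0, 0.0))
```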

In contrast to the Gibbs ensemble discussed later in this chapter, a number of simulations are required per coexistence point, but the number can be quite small, especially for vapor-liquid equilibrium calculations away from the critical point. For example, for a one-component system near the triple point, the density of the dense liquid can be obtained from a single NPT simulation at zero pressure. The chemical potential of the liquid, in turn, determines the density of the (near-ideal) vapor phase so that only one simulation is required. The method has been extended to mixtures [12, 13]. Significantly lower statistical uncertainties were obtained in [13] compared to earlier Gibbs ensemble calculations of the same Lennard-Jones binary mixtures, but the NPT + test particle method calculations were based on longer simulations. [Pg.356]

The variations in the background, the sensitivity to moisture, the alpha activity of the chamber itself and the influence of recombination were discussed by Hultqvist. The standard deviation due to counting statistics was estimated to be about 3 % (in a few measurements 6 %). The calibration was made by counting each alpha particle by a proportional counter specially designed at the Department for this purpose. The statistical uncertainty of the calibration of the equivalent radon concentration was estimated to be 12 %. [Pg.91]

Convergence of the FEP results was carefully studied. A typical mutation has 5-10 windows, each consisting of 15 M configurations of equilibration and 10 M configurations of averaging. From the batch-means procedure, the resulting statistical uncertainty in the computed ΔΔG_b values was ca. 0.15 kcal/mol. Two closed perturbation cycles (H → F, Cl → OH... [Pg.305]
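The batch-means procedure mentioned here estimates the statistical uncertainty of a correlated simulation average from the scatter of contiguous block averages; the following sketch uses synthetic data and an assumed batch count:

```python
import numpy as np

def batch_means_error(series, n_batches=10):
    """Split a correlated time series into contiguous batches and use the
    scatter of the batch averages to estimate the error of the overall mean."""
    batches = np.array_split(series, n_batches)
    means = np.array([b.mean() for b in batches])
    # Standard error of the grand mean from the batch-to-batch scatter.
    return means.mean(), means.std(ddof=1) / np.sqrt(n_batches)

# Synthetic correlated data standing in for per-configuration energy values.
rng = np.random.default_rng(3)
noise = rng.normal(size=100_000)
series = np.convolve(noise, np.ones(50) / 50, mode="valid")  # adds correlation

mean, err = batch_means_error(series)
print(f"mean = {mean:.4f} +/- {err:.4f}")
```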

The statistical uncertainties in these processes are reflected in "straggling", a term which describes the variations in the number of collisions undergone by any one particle and in the amount of energy transferred at each collision. There is a corresponding variation in the penetration, or range, of particles in a material. 1-MeV electrons have an average range of 4.3 mm in unit-density material. [Pg.15]

