Big Chemical Encyclopedia


Estimation errors, statistical

Ideally, the results should be validated somehow. One of the best methods for doing this is to make predictions for compounds known to be active that were not included in the training set. It is also desirable to eliminate compounds that are statistical outliers in the training set. Unfortunately, some studies, such as drug activity prediction, may not have enough known active compounds to make this step feasible. In this case, the estimated error in prediction should be increased accordingly. [Pg.248]
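The external-validation idea above can be sketched in a few lines. All data below are hypothetical, and a straight line stands in for whatever structure-activity model is actually used; the point is only that the prediction error is judged on compounds the fit never saw.

```python
import math

# Hypothetical data: (descriptor value, measured activity)
train = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]
held_out = [(2.5, 5.1), (3.5, 7.0)]   # known actives kept out of the fit

# Fit a least-squares line y = a + b*x on the training set only
n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
b = (sum((x - mx) * (y - my) for x, y in train)
     / sum((x - mx) ** 2 for x, _ in train))
a = my - b * mx

# RMS prediction error on the external compounds gauges real accuracy
residuals = [y - (a + b * x) for x, y in held_out]
rmse = math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

When too few known actives exist to hold any out, no such external check is possible, which is why the text recommends inflating the quoted error accordingly.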

The maximum search function is designed to locate intensity maxima within a limited area of x space. Such information is important in order to ensure that the specimen is correctly aligned. The user must supply an initial estimate of the peak location and the boundary of the region of interest. Points surrounding this estimate are sampled in a systematic pattern to form a new estimate of the peak position. Several iterations are performed until the statistical uncertainties in the peak location parameters, as determined by a linearized least squares fit to the intensity data, are within bounds that are consistent with their estimated errors. [Pg.150]
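A minimal stand-in for such an iterative search, with a hypothetical synthetic intensity function in place of measured counts and a simple shrinking five-point sampling pattern in place of the linearized least-squares refinement:

```python
import math

def intensity(x):
    # Synthetic detector response: a single peak at x0 = 1.2 (hypothetical)
    return 1000.0 * math.exp(-((x - 1.2) ** 2) / 0.5)

def refine_peak(x0, step=0.3, iters=25):
    """Iteratively re-sample a 5-point pattern around the current estimate,
    move to the strongest point, and tighten the spacing each pass."""
    for _ in range(iters):
        candidates = [x0 + k * step for k in (-2, -1, 0, 1, 2)]
        x0 = max(candidates, key=intensity)
        step *= 0.6
    return x0

peak = refine_peak(0.8)   # user-supplied initial estimate, as in the text
```

As long as the true maximum stays inside the sampled window, each pass halves-and-a-bit the spacing, so the estimate converges geometrically toward the peak.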

In Fig. 3.5A a comparison between time-gated detection and TCSPC is shown. The time-gated detection system was based on four 2 ns wide gates. The first gate opened about 0.5 ns after the peak of the excitation pulse from a pulsed diode laser. The TCSPC trace was recorded using 1024 channels of 34.5 ps width. The specimen consisted of a piece of fluorescent plastic with a lifetime of about 3.8 ns. In order to compare the results, approximately 1700-1800 counts were recorded in both experiments. The lifetimes obtained with TG and TCSPC amounted to 3.85 ± 0.2 ns and 3.80 ± 0.2 ns respectively, see Fig. 3.5B. Both techniques yield comparable lifetime estimates and statistical errors. [Pg.116]
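A sketch of how a lifetime can be recovered from gated counts, using the standard rapid-lifetime-determination relation τ = Δt / ln(N₁/N₂) for two gates of equal width. The gate timings below are loosely modeled on the setup in the text, but the noise-free counts are synthetic; this is not the authors' exact analysis.

```python
import math

tau_true = 3.8                      # ns, like the fluorescent plastic above
gate_width, t1, t2 = 2.0, 0.5, 2.5  # two 2 ns gates, opened 2 ns apart

def gate_counts(t_start):
    """Relative counts collected in one gate for a mono-exponential decay."""
    return (math.exp(-t_start / tau_true)
            - math.exp(-(t_start + gate_width) / tau_true))

n1, n2 = gate_counts(t1), gate_counts(t2)
# Rapid lifetime determination: tau = dt / ln(N1 / N2)
tau_est = (t2 - t1) / math.log(n1 / n2)
```

With noise-free counts the relation is exact; with ~1700-1800 real photon counts, Poisson noise in N₁ and N₂ produces a statistical error of the order quoted in the text.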

It is recommended to characterize these errors in order to estimate the statistical relevance of the measurement. Importantly, relative estimates are usually less prone to errors and may offer higher sensitivities. For example, in FRET-FLIM experiments the ratio of the donor lifetime in the absence and presence of an acceptor is measured. This offers a higher precision than absolute lifetime values. [Pg.132]

As discussed in Section 6.8, the estimation errors can be categorized as statistical, bias, and discretization. In a well designed MC simulation, the statistical error will be controlling. In contrast, in FV methods the dominant error is usually discretization. [Pg.347]
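The dominance of statistical error in MC can be demonstrated directly by repeating a toy simulation many times. This sketch (estimating π, not any particular transport problem) shows the sampling error shrinking roughly as 1/√N:

```python
import random
import statistics

random.seed(1)

def mc_pi(n):
    """Crude Monte Carlo estimate of pi from n random points in a square."""
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(n))
    return 4.0 * hits / n

# Repeat each simulation 50 times and measure the spread directly:
# the statistical error should fall roughly tenfold for 100x the samples
errs = {n: statistics.stdev(mc_pi(n) for _ in range(50))
        for n in (100, 10_000)}
```

No analogous repetition shrinks the discretization error of an FV method; that requires refining the grid.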

In reference 69, results were analyzed by drawing response surfaces. However, the data set only allows obtaining flat or twisted surfaces because the factors were only examined at two levels. Curvature cannot be modeled. An alternative is to calculate main and interaction effects with Equation (3), and to interpret the estimated effects statistically, for instance, with error estimates from negligible effects (Equation (8)) or from the algorithm of Dong (Equations (9), (12), and (13)). For the error estimation from negligible effects, not only two-factor interactions but also three- and four-factor interactions could be used to calculate (SE)e. [Pg.213]
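A hedged sketch of the effect calculations for a hypothetical 2² design. The cited Equations (3), (8), (9), (12), and (13) are not reproduced here; this uses the generic two-level contrast formula, and treats the single interaction effect as the "negligible" pool for the error estimate.

```python
# Hypothetical 2^2 full factorial: coded levels of factors A and B, response y
runs = [(-1, -1, 10.2), (+1, -1, 14.1), (-1, +1, 10.8), (+1, +1, 15.3)]
ys = [y for _, _, y in runs]

def effect(signs):
    """Two-level effect: mean response at +1 minus mean response at -1."""
    return 2.0 * sum(s * y for s, y in zip(signs, ys)) / len(ys)

E_A = effect([a for a, _, _ in runs])        # main effect of A
E_B = effect([b for _, b, _ in runs])        # main effect of B
E_AB = effect([a * b for a, b, _ in runs])   # two-factor interaction

# Error estimate from effects assumed negligible (here only E_AB):
# root-mean-square of the pooled negligible effects
se_e = (E_AB ** 2 / 1) ** 0.5
```

In a larger design the three- and four-factor interactions would join E_AB in the negligible pool, as the text notes.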

If an experiment is repeated a great many times and if the errors are purely random, then the results tend to cluster symmetrically about the average value (Figure 4-1). The more times the experiment is repeated, the more closely the results approach an ideal smooth curve called the Gaussian distribution. In general, we cannot make so many measurements in a lab experiment. We are more likely to repeat an experiment 3 to 5 times than 2 000 times. However, from the small set of results, we can estimate the statistical parameters that describe the large set. We can then make estimates of statistical behavior from the small number of measurements. [Pg.53]
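Estimating the Gaussian parameters from a small replicate set is a one-liner per statistic; the five values below are hypothetical:

```python
import math
import statistics

# Five replicate measurements (hypothetical) of the same quantity
x = [10.1, 9.8, 10.4, 10.0, 9.7]

mean = statistics.mean(x)     # estimate of the population mean
s = statistics.stdev(x)       # estimate of the population sigma (n-1 form)
sem = s / math.sqrt(len(x))   # standard error of the mean
```

The sample mean and standard deviation from 3 to 5 replicates are exactly these small-set estimates of the parameters of the ideal Gaussian that 2 000 repetitions would trace out.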

Table VI shows the concentrations of plutonium, neptunium, and uranium measured at the inlet and outlet of the unaltered and hydrothermally-altered basalt core fissures in the first five analog experiments (see Table I). Under conditions simulating a repository that was unaltered by groundwater interaction (Table I, Exp 1-3), both Np and Pu, in the concentrations developed in these analog experiments from the leaching of the waste form, were substantially retarded within the 14.6-cm basalt fissure. In fact, as can be seen from Figure 4, almost all of the Np activity was sorbed on the first one-third of the rock fissure. The data in Figure 4 have an estimated error, based on counting statistics, of approximately 2 counts per 1000 seconds. Uranium retardation was determined to be not as complete ...
Probability distribution models can be used to represent frequency distributions of variability or uncertainty distributions. When the data set represents variability for a model parameter, there can be uncertainty in any non-parametric statistic associated with the empirical data. For situations in which the data are a random, representative sample from an unbiased measurement or estimation technique, the uncertainty in a statistic could arise because of random sampling error (and thus be dependent on factors such as the sample size and range of variability within the data) and random measurement or estimation errors. The observed data can be corrected to remove the effect of known random measurement error to produce an error-free data set (Zheng & Frey, 2005). [Pg.27]
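One common way to quantify the random-sampling uncertainty in a non-parametric statistic is the bootstrap — a technique chosen here for illustration, not one prescribed by the text — applied to a synthetic sample:

```python
import random
import statistics

random.seed(4)
data = [random.gauss(50.0, 5.0) for _ in range(30)]  # hypothetical sample

# Bootstrap: resample the observed data with replacement and watch how
# the statistic of interest (here the median) varies across resamples
medians = [statistics.median(random.choices(data, k=len(data)))
           for _ in range(2000)]
se_median = statistics.stdev(medians)  # sampling uncertainty of the median
```

The spread of the resampled medians depends on the sample size and on the variability within the data, exactly the dependence described above.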

This method of estimating the errors σ(ki) in the parameters ki is based on ideal behavior, e.g., perfect initial concentrations, disturbed only by white noise in the measurement. Experience shows that the estimated errors tend to be smaller than those determined by statistical analysis of several measurements fitted individually. [Pg.235]
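The contrast between the ideal (white-noise) error estimate and the scatter of repeatedly fitted parameters can be simulated. A slope-through-origin fit is used here for simplicity; because the simulated noise really is ideal white noise, the two estimates agree, whereas with real data the repeat-based estimate is typically larger, as the text notes.

```python
import random
import statistics

random.seed(2)
xs = [float(i + 1) for i in range(10)]   # hypothetical measurement times
true_k, sigma = 1.5, 0.3                 # rate parameter and noise level

def fit_slope(ys):
    """Least-squares slope through the origin: k = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Ideal-case error propagated from the white-noise sigma alone
sigma_k_ideal = sigma / sum(x * x for x in xs) ** 0.5

# Error observed by actually repeating the "experiment" many times
slopes = [fit_slope([true_k * x + random.gauss(0.0, sigma) for x in xs])
          for _ in range(500)]
sigma_k_repeat = statistics.stdev(slopes)
```

Systematic effects such as imperfect initial concentrations would widen the repeat-based scatter without touching the ideal-case formula.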

Points 1 to 3 explicitly define the assessment of compliance. Point 1 is what many scientists would regard as the standard, but it is only a part of the standard. Points 2 and 3 deal with the requirement that the limit value needs to be used for 2 purposes: to estimate the measures needed to correct or prevent failure, and to assess compliance in an unbiased manner (perhaps in a way that enables comparisons between regions or nations). Both of these tasks require standards defined as summary statistics that can be estimated using statistical methods in an unbiased manner, thus allowing the quantification of statistical errors. Generally, this means that the standards must be expressed as summary statistics such as annual averages and annual percentiles. [Pg.38]

Random error can over- or underestimate risk and is generally not as severe as bias. Moreover, the magnitude of error can be estimated with statistical techniques. Assessment of confounding, synergism, or effect modification can be accomplished in the analytical phase (by stratification or multivariate modeling), providing sufficient data have been collected on those factors. Restriction or randomization procedures also can be used in the design phase to minimize confounders. [Pg.230]

This part of the chapter is concerned with the evaluation of uncertainties in data and in calculated results. The concepts of random errors/precision and systematic errors/accuracy are discussed. Statistical theory for assessing random errors in finite data sets is summarized. Perhaps the most important topic is the propagation of errors, which shows how the error in an overall calculated result can be obtained from known or estimated errors in the input data. Examples are given throughout the text, the headings of key sections are marked by an asterisk, and a convenient summary is given at the end. [Pg.38]
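For example, propagation of independent random errors through a product q = x·y uses the familiar quadrature rule (the input values below are hypothetical):

```python
import math

# q = x * y with independent random errors sx and sy (hypothetical values)
x, sx = 4.0, 0.1
y, sy = 2.5, 0.05

q = x * y
# Quadrature rule for products: relative errors add in quadrature
sq = q * math.sqrt((sx / x) ** 2 + (sy / y) ** 2)
```

So q = 10.0 with an absolute error of about 0.32, dominated here by the 2.5% relative error in x.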

The uncertainty values calculated for the two-parameter model are quoted only for illustrative purposes, since we have good reason to believe from our statistical tests that they are not to be relied upon. Note that they are much smaller than the uncertainty values calculated for the three-parameter model. Note also that the differences between the two models in the fitted parameter values are far outside the limits of error. Had we lazily accepted the two-parameter fit and its estimated error limits and attached physical significance to either of the two parameters, we would have badly deceived ourselves. On the other hand, in view of the excellent statistical performance of the three-parameter fit, we may accept the parameter values and the estimated uncertainty values of the three-parameter model with considerable confidence. [Pg.684]

An uncertainty analysis involves the determination of the variation or imprecision in an output function based on the collective variance of model inputs. One of the five issues in uncertainty analysis that must be confronted is how to distinguish between the relative contribution of variability (i.e. heterogeneity) versus true uncertainty (measurement imprecision) to the characterization of predicted outcome. Variability refers to quantities that are distributed empirically - such factors as soil characteristics, weather patterns and human characteristics - which come about through processes that we expect to be stochastic because they reflect actual variations in nature. These processes are inherently random or variable, and cannot be represented by a single value, so that we can determine only their moments (mean, variance, skewness, etc.) with precision. In contrast, true uncertainty or model specification error (e.g. statistical estimation error) refers to an input that, in theory, has a single value, which cannot be known with precision due to measurement or estimation error. [Pg.140]

The use of the terms upper bound and worst-case refers to the expectation that this approach is likely to be highly conservative and will not underestimate potential risk. These terms are not meant to connote that statistical analysis to estimate error bounds would be performed, or that additional safety factors (traditional for extrapolation to acceptable daily intake values for non-carcinogens) would be incorporated into the extrapolation. [Pg.166]

To estimate the statistical errors in the estimated density of states, 7 independent simulations are conducted with exactly the same code but with different random-number generator seeds. The calculated density of states always contains an arbitrary multiplier; the estimated densities of states from these 7 runs are matched by shifting each ln Ω(E) in such a way as to minimize the total variance. [Pg.76]
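Matching curves known only up to an additive constant (the arbitrary multiplier becomes an additive offset in ln Ω(E)) can be sketched as follows: each run is shifted so that its mean agrees with a reference run, which is the least-squares (minimum total variance) choice of additive constants. The data here are synthetic stand-ins, not Wang-Landau output.

```python
import random
import statistics

random.seed(3)
energies = range(20)
base = [0.05 * e * e for e in energies]   # stand-in for ln Omega(E)

# 7 independent runs: same shape, arbitrary additive offset plus noise
offsets = (0.0, 3.1, -1.7, 8.2, 0.4, -5.0, 2.2)
runs = [[b + off + random.gauss(0.0, 0.01) for b in base] for off in offsets]

# Shift each run so its mean matches the first run's mean; for purely
# additive constants this minimizes the total variance across runs
ref_mean = statistics.mean(runs[0])
aligned = [[v - (statistics.mean(r) - ref_mean) for v in r] for r in runs]

raw_spread = max(statistics.stdev(col) for col in zip(*runs))
spread = max(statistics.stdev(col) for col in zip(*aligned))
```

After alignment, the per-energy scatter across the 7 runs is just the residual noise, and it is this scatter that serves as the statistical error estimate.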


See other pages where Estimation errors statistical is mentioned: [Pg.1991]    [Pg.1993]    [Pg.2287]    [Pg.2288]    [Pg.307]    [Pg.362]    [Pg.267]    [Pg.35]    [Pg.320]    [Pg.360]    [Pg.371]    [Pg.640]    [Pg.95]    [Pg.76]    [Pg.31]    [Pg.42]    [Pg.442]    [Pg.10]    [Pg.2282]    [Pg.176]    [Pg.201]    [Pg.91]    [Pg.329]    [Pg.179]    [Pg.192]    [Pg.299]    [Pg.111]    [Pg.39]    [Pg.179]    [Pg.34]    [Pg.190]    [Pg.165]    [Pg.133]
See also in sourсe #XX -- [ Pg.295 , Pg.298 , Pg.300 , Pg.301 , Pg.306 , Pg.328 , Pg.329 , Pg.332 , Pg.337 , Pg.338 , Pg.341 , Pg.342 , Pg.352 , Pg.353 , Pg.359 ]





