Big Chemical Encyclopedia


Standard error reliability estimates

Model reliability requires that the uncertainty of the parameter estimates and the random effects be assessed. We are interested in the standard errors of the estimated parameters and random effects in the model. These uncertainties should be small: the relative standard error should be less than 25% for the parameters and less than 35% for the random effects (25). [Pg.236]
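
As a minimal sketch of that acceptance rule (the 25%/35% thresholds quoted above; the function name and example numbers are hypothetical), the check can be written as:

```python
def rse_acceptable(estimate, se, threshold_pct):
    """Relative standard error check: RSE = 100 * |SE / estimate|,
    compared against an acceptance threshold in percent."""
    rse = 100 * abs(se / estimate)
    return rse <= threshold_pct

# Fixed-effect parameter: RSE should be below 25%
ok_fixed = rse_acceptable(2.0, 0.4, 25)     # RSE = 20%
# Random effect: RSE should be below 35%
ok_random = rse_acceptable(0.09, 0.04, 35)  # RSE is about 44%
```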

The mean of several readings (x̄) will make a more reliable estimate of the true mean (μ) than is given by one observation. The greater the number of measurements (n), the closer will the sample average approach the true mean. The standard error of the mean, s_x̄, is given by s/√n. [Pg.136]

Two models of practical interest using quantum chemical parameters were developed by Clark et al. [26, 27]. Both studies were based on 1085 molecules and 36 descriptors calculated with the AM1 method following structure optimization and electron density calculation. An initial set of descriptors was selected with a multiple linear regression model and further optimized by trial-and-error variation. The second study obtained a standard error of 0.56 for the 1085 compounds and also estimated the reliability of the neural network prediction by analyzing the standard deviation of the error for an ensemble of 11 networks trained on different randomly selected subsets of the initial training set [27]. [Pg.385]

Shear cell measurements offer several pieces of information that permit a better understanding of the material flow characteristics. Two parameters, the shear index, n, and the tensile strength, S, determined by fitting simplified shear cell data to Eq. (6), are reported in Table 2. Because of the experimental method, only a poor estimate of the tensile strength is obtained in many cases. The shear index estimate, however, is quite reliable, based on the standard error of the estimate shown in parentheses in Table 2. The shear index is a simple measure of the flowability of a material and is used here for comparison purposes because it is reasonably reliable [50] and easy to determine. The effective angle of internal... [Pg.302]

Snow, especially its water-soluble fraction, is one of the most sensitive and informative indicators of mass transfer in the chain air - soil - drinking water. Therefore, analytical data on snow-melt samples were selected for inter-laboratory quality control. Inter-laboratory verification of the analytical results obtained by all the groups has shown that the relative standard errors for the concentrations of all the determined elements do not exceed 5-15% in the concentration range 0.01-10000 µg/l, which is consistent with the metrological characteristics of the methods employed. All analytical data collected by different groups of analysts were tested for reliability and... [Pg.139]

From equations 13-16, the standard error for each measurement as a function of the elution time can be obtained. Further propagation of these errors through the integration across the chromatogram results in estimates of the errors associated with the SEC calculation of the average polymer properties. This enables reliable statistical comparisons between SEC estimates and static measurements... [Pg.225]

Small values of this standard error indicate high reliability: it is likely that the observed value, x̄₁ − x̄₂, for the treatment effect is close to the true treatment effect, μ₁ − μ₂. In contrast, a large value for the standard error tells us that x̄₁ − x̄₂ is not a reliable estimate of μ₁ − μ₂... [Pg.38]

More generally, whatever statistic we are interested in, there is always a formula that allows us to calculate its standard error. The formulas change, but their interpretation always remains the same: a small standard error is indicative of high precision, high reliability. Conversely, a large standard error means that the observed value of the statistic is an unreliable estimate of the true (population) value. It is also always the case that the standard error is an estimate of the standard deviation of the list of repeat values of the statistic that we would get were we to repeat the sampling process: a measure of the inherent sampling variability. [Pg.38]
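
That last point can be illustrated by resampling: a bootstrap sketch (a generic technique, not taken from the source; the data values are hypothetical) estimates the standard error of any statistic as the standard deviation of its values over resampled data sets.

```python
import random
import statistics

def bootstrap_se(sample, statistic, n_boot=2000, seed=42):
    """Standard error of an arbitrary statistic, estimated as the
    standard deviation of the statistic over bootstrap resamples."""
    rng = random.Random(seed)
    reps = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]
        reps.append(statistic(resample))
    return statistics.stdev(reps)

# Works for statistics with no simple standard-error formula, e.g. the median
data = [9.8, 10.1, 10.4, 9.9, 10.3, 10.0, 10.2, 10.5,
        9.7, 10.6, 10.0, 10.1, 9.9, 10.2, 10.3, 9.8,
        10.4, 10.0, 10.1, 9.9]
se_median = bootstrap_se(data, statistics.median)
```

For the sample mean, the bootstrap value closely matches the textbook s/√n, which is one way to check the resampling machinery before applying it to less tractable statistics.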

When we are dealing with samples rather than populations, we cannot use the standard normal deviate, Z, to make predictions, since this requires knowledge of the population mean and variance or standard deviation. In general, we do not know the values of these parameters. However, provided the sample is a random one, its mean x̄ is a reliable estimate of the population mean μ, and we can use the central limit theorem to provide an estimate of the standard deviation of the sample mean, σ_x̄. This estimate, known as the standard error of the mean, is given by s/√n. [Pg.302]

The relative standard error (RSE) is a measure of the reliability of a statistic. The smaller the RSE, the more precise the estimate. [Pg.54]

In the calculation of total pressure and vapor composition from boiling point data using the indirect method, the greatest source of error lies in the liquid-phase composition. We have attempted to characterize the frequency distribution of the error in the calculated vapor composition by standard statistical methods, and this has given a satisfactory result for the methanol-water system saturated with sodium chloride when the following estimates of the standard deviation were used: x, 0.003; y, 0.006; T, 0.1 °C; and π, 2 mm Hg. This work indicates that in the design of future experiments more data points are required and that, for each variable, a reliable estimate of the standard deviation is highly desirable. [Pg.47]

Most practical exercises are based on a limited number of individual data values (a sample) which are used to make inferences about the population from which they were drawn. For example, the lead content might be measured in blood samples from 100 adult females and used as an estimate of the adult female lead content, with the sample mean (x̄) and sample standard deviation (s) providing estimates of the true values of the underlying population mean (μ) and the population standard deviation (σ). The reliability of the sample mean as an estimate of the true (population) mean can be assessed by calculating the standard error of the sample mean (often abbreviated to standard error or SE), SE = s/√n. [Pg.268]
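
A direct computation of SE = s/√n, using hypothetical readings (any consistent units):

```python
import math
import statistics

def standard_error(sample):
    """Standard error of the sample mean: SE = s / sqrt(n)."""
    s = statistics.stdev(sample)      # sample standard deviation (n - 1 denominator)
    return s / math.sqrt(len(sample))

readings = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0]
se = standard_error(readings)         # about 0.087 in the same units
```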

On many occasions, sample statistics are used to provide an estimate of the population parameters. It is extremely useful to indicate the reliability of such estimates. This can be done by putting a confidence limit on the sample statistic. The most common application is to place confidence limits on the mean of a sample from a normally distributed population. This is done by working out the limits as x̄ − (t_P[n−1] × SE) and x̄ + (t_P[n−1] × SE), where t_P[n−1] is the tabulated critical value of Student's t statistic for a two-tailed test with n − 1 degrees of freedom and SE is the standard error of the mean (p. 268). A 95% confidence limit (i.e. P = 0.05) tells you that, on average, 95 times out of 100 this limit will contain the population... [Pg.278]
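
A sketch of those limits at the 95% confidence level (the critical values below are two-tailed t values copied from standard tables for a few degrees of freedom; the sample data are hypothetical):

```python
import math
import statistics

# Two-tailed Student's t critical values for P = 0.05 (from standard tables)
T_CRIT_95 = {5: 2.571, 9: 2.262, 19: 2.093, 29: 2.045}

def confidence_limits_95(sample):
    """95% confidence limits on the mean: mean -/+ t[n-1] * SE."""
    n = len(sample)
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    t = T_CRIT_95[n - 1]              # n - 1 degrees of freedom
    return mean - t * se, mean + t * se

lower, upper = confidence_limits_95(
    [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.3, 5.1, 4.9])
```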

Given two estimates of a statistic, one from a sample of size n and the other from a sample of size 2n, one might expect that the estimate from the larger sample would be more reliable than that from the smaller sample. This is, in fact, supported by statistical theory. If the variance in the population is σ², then the variance of the sample mean for samples of size n is σ²/n. The square root of this is the standard error of the mean. Consistent with the variance of the sample mean being 1/n times that of a single determination (σ²), the standard deviation and the CV% of the sample mean are reduced by the square root of n. As a direct consequence, an assay method that relies on the mean of two independent concentration determinations has a CV 1/√2 that of the same method based on a single determination. This provides an easy way to increase the precision (reduce variability) of a method. An example of this is found in radioimmunoassay, in which it is common for a concentration estimate to be calculated from the mean response of two determinations of a specimen. [Pg.3484]
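
The √2 reduction for duplicate determinations can be demonstrated by simulation (hypothetical assay values: a true concentration of 100 with a CV of about 10%):

```python
import math
import random
import statistics

def cv_percent(values):
    """Coefficient of variation as a percentage of the mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

rng = random.Random(1)
# Simulated single determinations of the same specimen
singles = [rng.gauss(100, 10) for _ in range(4000)]
# Duplicate protocol: report the mean of each consecutive pair
duplicates = [(a + b) / 2 for a, b in zip(singles[0::2], singles[1::2])]

cv_single = cv_percent(singles)       # close to 10%
cv_dup = cv_percent(duplicates)       # close to 10% / sqrt(2), about 7%
```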

A necessary condition for the validity of a regression model is that the multiple correlation coefficient is as close as possible to one and the standard error of the estimate s is small. However, this condition (fitting ability) is not sufficient for model validity, as models give a closer fit (smaller s and larger R²) the larger the number of parameters and variables they contain. Moreover, unfortunately, these parameters are not related to the capability of the model to make reliable predictions on future data. [Pg.461]

The variation of the estimated standard errors of the heat capacity and energy of activation with temperature is illustrated in Figs. 2 and 3. The curves demonstrate that activation parameters are obtained most reliably in the vicinity of the mean temperature of the experimental range where the errors are at a minimum, and that quite substantial inaccuracies may be involved at temperatures well outside this region. It can also be seen that temperature changes have a much more pronounced effect on the errors when these changes also alter A than when... [Pg.133]

Two practical concerns of critics of the use of HRQOL assessments in individual patient care are (1) respondent burden and (2) the reliability of scores obtained from shorter questionnaires. Current researchers struggle with the competing demands of everyday use, which requires shorter forms, and the reliability of a result obtained from fewer questions. Specifically, concerns are raised about the reliability of the result and its interpretation. With popular outcome measures, the standard error around a single-person estimate is too large to ensure stable conclusions. [Pg.424]

To summarize, the computational aspects of confidence intervals involve a point estimate of the population parameter, some error attributed to sampling, and the amount of confidence (or reliability) required for interpretation. We have illustrated the general framework of the computation of confidence intervals using the case of the population mean. It is important to emphasize that interval estimates for other parameters of interest will require different reliability factors because these depend on the sampling distribution of the estimator itself and different calculations of standard errors. The calculated confidence interval has a statistical interpretation based on a probability statement. [Pg.74]

In Chapter 6 we described the basic components of hypothesis testing and interval estimation (that is, confidence intervals). One of the basic components of interval estimation is the standard error of the estimator, which quantifies how much the sample estimate would vary from sample to sample if (totally implausibly) we were to conduct the same clinical study over and over again. The larger the sample size in the trial, the smaller the standard error. Another component of an interval estimate is the reliability factor, which acts as a multiplier for the standard error. The more confidence that we require, the larger the reliability factor (multiplier). The reliability factor is determined by the shape of the sampling distribution of the statistic of interest and is the value that defines an area under the curve of (1 − α). In the case of a two-sided interval, the reliability factor defines lower and upper tail areas of size α/2. [Pg.103]
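
For the common case of a normal sampling distribution, the reliability factor can be computed from the inverse normal CDF (a sketch using Python's standard library):

```python
from statistics import NormalDist

def reliability_factor(confidence):
    """Two-sided reliability factor: the z value that leaves an area
    of alpha/2 in each tail, where alpha = 1 - confidence."""
    alpha = 1 - confidence
    return NormalDist().inv_cdf(1 - alpha / 2)

z_95 = reliability_factor(0.95)   # the familiar 1.96
z_99 = reliability_factor(0.99)   # larger: more confidence, wider interval
```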

When a model is used for descriptive purposes, goodness-of-fit, reliability, and stability (the components of model evaluation) must be assessed. Model evaluation should be done in a manner consistent with the intended application of the PM model. The reliability of the analysis results can be checked by carefully examining diagnostic plots, key parameter estimates, standard errors, case deletion diagnostics (7-9), and/or sensitivity analysis as may seem appropriate. Confidence intervals (standard errors) for parameters may be checked using nonparametric techniques, such as the jackknife and bootstrapping, or the profile likelihood method. Model stability, to determine whether the covariates in the PM model are those that should be tested for inclusion in the model, can be checked using the bootstrap (9). [Pg.226]

Furthermore, when alternative approaches are applied in computing parameter estimates, the question to be addressed is: Do these other approaches yield similar parameter and random effects estimates and conclusions? An example of addressing this second point would be estimating the parameters of a population pharmacokinetic (PPK) model by the standard maximum likelihood approach and then confirming the estimates by either constructing the profile likelihood plot (i.e., mapping the objective function), using the bootstrap (4, 9) to estimate 95% confidence intervals, or using the jackknife method (7, 26, 27) and bootstrap to estimate standard errors of the estimate (4, 9). When the relative standard errors are small and the alternative approaches produce similar results, we conclude that the model is reliable. [Pg.236]

Numerical methods used to fit experimental data should, ideally, give parameter estimates that are unbiased with reliable estimates of precision. Therefore, determining the reliability of parameter estimates from simulated PPK studies is an absolute necessity since it may affect study outcome. Not only should bias and precision associated with parameter estimation be determined but also the confidence with which these parameters are estimated should be examined. Confidence interval estimates are a function of bias, standard error of parameter estimates, and the distribution of parameter estimates. Use of an informative design can have a significant impact on increasing precision. Paying attention to these measures of parameter estimation efficiency is critical to a simulation study outcome (6, 7). [Pg.305]

M. H. Quenouille introduced the jackknife (JKK) in 1949 (12) and it was later popularized by Tukey in 1958, who first used the term (13). Quenouille's motivation was to construct an estimator of bias that would have broad applicability. The JKK has been applied to bias correction, the estimation of variance, and the standard error of variables (4, 12-16). Thus, for pharmacometrics it has the potential for improving models and has been applied in the assessment of PMM reliability (17). The JKK may not, however, be employed as a method for model validation. [Pg.402]
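
A minimal leave-one-out sketch of the jackknife standard-error estimator (illustrative only, with hypothetical data; not the pharmacometric application described above):

```python
import math
import statistics

def jackknife_se(sample, statistic):
    """Jackknife standard error: recompute the statistic with each
    observation left out in turn, then scale the spread of those values."""
    n = len(sample)
    loo = [statistic(sample[:i] + sample[i + 1:]) for i in range(n)]
    mean_loo = statistics.mean(loo)
    var = (n - 1) / n * sum((v - mean_loo) ** 2 for v in loo)
    return math.sqrt(var)

se_jk = jackknife_se([1.0, 2.0, 3.0, 4.0, 5.0], statistics.mean)
```

For the sample mean the jackknife reproduces s/√n exactly, which makes it a convenient sanity check before applying it to less tractable statistics.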

One estimate for the accuracies of the molecular constants can be obtained from the statistical standard deviations. However, experience shows that these estimates are invariably too low, owing to the neglect of systematic errors. More reliable estimates can be found by two methods. In the first, several different bands with one state in common are analyzed independently and the results compared. Thus values for the ground-state rotational constants of glyoxal-d2 obtained from the independent analyses of five bands are reproduced in Table 3. It is seen that the values for A″, B″, and C″ are consistent to 0.00025, 0.00004, and 0.00004 cm⁻¹, respectively, whereas the standard deviations of these constants obtained from the analysis of the 0-0 band are 0.00004, 0.000008, and 0.000009 cm⁻¹, respectively. In this example a realistic estimate for the accuracy is roughly five times the standard deviation. [Pg.123]

The positional parameters derived in the usual crystal structure determination represent the coordinates of the first moments of the distribution of instantaneous atomic centers as produced by zero point and thermal displacements from an equilibrium configuration. The distance between pairs of first-moment positions, together with a measure of precision derived in a straightforward way from the estimated standard errors of the parameters, constitutes a raw distance, and its precision measure is as reliable as are those of the positional parameters. This is the quantity usually quoted by crystallographers as raw or uncorrected interatomic distance. [Pg.221]

The standard error of the estimate yields information concerning the reliability of the values predicted by the regression equation. The greater the standard error of the estimate, the less reliable the predicted values. [Pg.177]

Uncertainty in the linear regression is estimated by determining the standard errors of the adjustable parameters and of predictions from the linear model. With a reliable estimate of the variance of the response variable, σ², the known value is used for uncertainty predictions. Otherwise, the variance is estimated in terms of the sum of squared residuals. The residual at each measurement is defined as the difference between the measured and the model-predicted response. [Pg.237]
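
When σ² must be estimated from the residuals, the standard error of a fitted slope can be sketched as follows (ordinary least squares with n − 2 degrees of freedom; the data points are hypothetical):

```python
import math

def fit_with_slope_se(x, y):
    """Least-squares slope and intercept, plus the standard error of the
    slope estimated from the residual sum of squares."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    # Residual sum of squares estimates the response variance (n - 2 d.f.)
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    s2 = rss / (n - 2)
    return b, a, math.sqrt(s2 / sxx)

slope, intercept, slope_se = fit_with_slope_se(
    [1.0, 2.0, 3.0, 4.0], [2.1, 3.9, 6.1, 7.9])
```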

