Big Chemical Encyclopedia


Statistics point estimation

There are a variety of ways to express absolute QRA results. Absolute frequency results are estimates of the statistical likelihood of an accident occurring. Table 3 contains examples of typical statements of absolute frequency estimates. These estimates for complex system failures are usually synthesized using basic equipment failure and operator error data. Depending upon the availability, specificity, and quality of failure data, the estimates may have considerable statistical uncertainty (e.g., factors of 10 or more because of uncertainties in the input data alone). When reporting single-point estimates or best estimates of the expected frequency of rare events (i.e., events not expected to occur within the operating life of a plant), analysts sometimes provide a measure of the sensitivity of the results arising from data uncertainties. [Pg.14]

In frequentist statistics, by contrast, nuisance parameters are usually treated with point estimates, and inference on the parameter of interest is based on calculations with the nuisance parameter as a constant. This can result in large errors, because there may be considerable uncertainty in the value of the nuisance parameter. [Pg.322]

The descriptive approach typically utilises only the point estimates of the appropriate statistical parameter and compares them to the pre-defined acceptance limits (Boulanger et al., 2003; Hartmann et al., 1998). Typical acceptance limits are 2% for the relative bias and 3% for the RSDIP (Bouabidi et al., 2010). [Pg.28]

ML is the approach most commonly used to fit a distribution of a given type (Madgett 1998; Vose 2000). An advantage of ML estimation is that it is part of a broad framework of likelihood-based statistical methodology, which provides statistical hypothesis tests (likelihood-ratio tests) and confidence intervals (Wald and profile likelihood intervals) as well as point estimates (Meeker and Escobar 1995). MLEs are invariant under parameter transformations (the MLE for some 1-to-1 function of a parameter is obtained by applying the function to the MLE of the untransformed parameter). In most situations of interest to risk assessors, MLEs are consistent and sufficient (one distribution for which sufficient statistics of dimension smaller than n do not exist, MLEs or otherwise, is the Weibull distribution, which is not an exponential family). When MLEs are biased, the bias ordinarily disappears asymptotically (as data accumulate). ML may or may not require numerical optimization skills (for optimization of the likelihood function), depending on the distributional model. [Pg.42]
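As a concrete illustration of these properties, the following sketch (with simulated data, so the distribution, sample size, and seed are illustrative assumptions) computes the MLE of an exponential rate, attaches a Wald 95% confidence interval, and uses invariance to read off the MLE of the mean:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=500)  # simulated sample, true rate = 0.5

# MLE for the exponential rate: lambda_hat = 1 / sample mean
lam_hat = 1.0 / data.mean()

# Wald 95% CI: MLE +/- 1.96 * asymptotic standard error (lam_hat / sqrt(n))
se = lam_hat / np.sqrt(len(data))
ci = (lam_hat - 1.96 * se, lam_hat + 1.96 * se)

# Invariance: the MLE of the mean (a 1-to-1 transform of the rate) is 1 / lam_hat
mean_hat = 1.0 / lam_hat
```

The same likelihood also supports likelihood-ratio tests and profile-likelihood intervals; the Wald interval above is simply the cheapest of the options the excerpt lists.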

An approach that is sometimes helpful, particularly for recent pesticide risk assessments, is to use the parameter values that result in best fit (in the sense of LS), comparing the fitted cdf to the cdf of the empirical distribution. In some cases, such as when fitting a log-normal distribution, formulae from linear regression can be used after transformations are applied to linearize the cdf. In other cases, the residual SS is minimized using numerical optimization, i.e., one uses nonlinear regression. This approach seems reasonable for point estimation. However, the statistical assumptions that would often be invoked to justify LS regression will not be met in this application. Therefore the use of any additional regression results (beyond the point estimates) is questionable. If there is a need to provide standard errors or confidence intervals for the estimates, bootstrap procedures are recommended. [Pg.43]
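A minimal sketch of this procedure for the log-normal case, using hypothetical simulated data: the cdf is linearized with a probit transform so linear-regression formulae apply, and, since the usual regression assumptions are not met, bootstrap resampling supplies the standard errors:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.sort(rng.lognormal(mean=1.0, sigma=0.5, size=200))
n = len(x)

# Empirical cdf with plotting positions (i - 0.5)/n to avoid probabilities of 0 and 1
p = (np.arange(1, n + 1) - 0.5) / n

def fit_lognormal_ls(xs, ps):
    # Probit-transform the cdf: norm.ppf(F(x)) = (ln x - mu)/sigma is linear in ln x
    slope, intercept = np.polyfit(np.log(xs), norm.ppf(ps), 1)
    sigma = 1.0 / slope
    mu = -intercept * sigma
    return mu, sigma

mu_hat, sig_hat = fit_lognormal_ls(x, p)

# Bootstrap standard errors: refit on resampled data rather than trusting
# classical regression standard errors, whose assumptions do not hold here
boot = []
for _ in range(200):
    xs = np.sort(rng.choice(x, size=n, replace=True))
    boot.append(fit_lognormal_ls(xs, p))
mu_se, sig_se = np.std(boot, axis=0)
```

The point estimates (mu_hat, sig_hat) are the LS answer; only the uncertainty statements come from the bootstrap.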

Notice that, while the Monte Carlo simulation produces point estimates, the bounding analyses yield intervals for the various measures. The intervals represent sure bounds on the respective statistics. They reveal just how unsure the answers given by the Monte Carlo simulation actually were. If we look in the last column with no assumption, for instance, we see that the variance might actually be over 6 times larger than the Monte Carlo simulation estimates. [Pg.104]

The mortality data were compared for field and cultured heart urchins using two-way ANOVA. Survival data were statistically analyzed in ToxCalc using the Trimmed Spearman-Karber point estimate test to determine the lethal concentration LC50. [Pg.60]
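For illustration, the untrimmed Spearman-Karber point estimate can be computed in a few lines; the dose-mortality data below are hypothetical, and the trimmed variant implemented in ToxCalc additionally truncates the tails of the mortality curve before averaging:

```python
import numpy as np

# Hypothetical concentrations and proportion dead; mortality must rise
# monotonically from 0 to 1 for the plain Spearman-Karber estimator
conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
p = np.array([0.0, 0.1, 0.5, 0.9, 1.0])

# Spearman-Karber: log LC50 is the mean log-concentration under the discrete
# "tolerance" distribution implied by the successive mortality increments
logc = np.log10(conc)
log_lc50 = np.sum(np.diff(p) * (logc[:-1] + logc[1:]) / 2.0)
lc50 = 10 ** log_lc50
```

With this symmetric example the estimate lands on the middle concentration (4.0), which is a useful sanity check for any implementation.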

Valuable information on whether the structure is homogeneous or inhomogeneous can also be obtained by analyzing the network formation process. A shift of experimental and estimated statistical parameters (Mw, gel point conversion, sol fraction, etc.) will be observed if inhomogeneities are formed as a result of the crosslinking process. [Pg.221]

The best point estimate depends upon the criteria by which we judge the estimate. Statistics provides many possible ways to estimate a given population parameter, and several properties of estimates have been defined to help us choose which is best for our purposes. [Pg.31]

Note that when more than 85% of the drug is dissolved from both products within 15 minutes, dissolution profiles may be accepted as similar without further mathematical evaluation. For the sake of completeness, one should add that some concerns have been raised regarding the assessment of similarity using the direct comparison of the f1 and f2 point estimates with the similarity limits [140-142]. Attempts have been made to place the use of the similarity factor f2 as a criterion for assessment of similarity between dissolution profiles in a statistical context using a bootstrap method [141], since its sampling distribution is unknown. [Pg.112]
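A sketch of one such bootstrap scheme (the f2 formula is the standard similarity factor; the dissolution profiles, unit counts, and resampling choices below are illustrative assumptions, not the procedure of [141]):

```python
import numpy as np

def f2(ref, test):
    # Similarity factor: f2 = 50*log10(100 / sqrt(1 + mean squared difference))
    msd = np.mean((np.asarray(ref) - np.asarray(test)) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Hypothetical percent-dissolved profiles: 12 vessels x 5 time points per product
ref = np.array([[20, 42, 63, 81, 92.0]] * 12) + np.random.default_rng(2).normal(0, 3, (12, 5))
tst = np.array([[18, 40, 60, 79, 90.0]] * 12) + np.random.default_rng(3).normal(0, 3, (12, 5))

# Point estimate from the mean profiles
f2_hat = f2(ref.mean(axis=0), tst.mean(axis=0))

# Approximate the unknown sampling distribution by resampling vessels
rng = np.random.default_rng(4)
boot = []
for _ in range(1000):
    ri = rng.integers(0, 12, 12)
    ti = rng.integers(0, 12, 12)
    boot.append(f2(ref[ri].mean(axis=0), tst[ti].mean(axis=0)))
lo, hi = np.percentile(boot, [5, 95])
```

The bootstrap percentiles give an interval around the f2 point estimate, which is what direct comparison against a fixed similarity limit lacks.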

PK data The PK parameters of ABC4321 in plasma were determined by individual PK analyses. The individual and mean concentrations of ABC4321 in plasma were tabulated and plotted. PK variables were listed and summarized by treatment with descriptive statistics. An analysis of variance (ANOVA) including sequence, subject nested within sequence, period, and treatment effects, was performed on the ln-transformed parameters (except tmax). The mean square error was used to construct the 90% confidence interval for treatment ratios. The point estimates were calculated as the ratio of the antilogs of the least-squares means. Pairwise comparisons to treatment A were made. Whole blood concentrations of XYZ1234 were not used to perform PK analyses. [Pg.712]
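The ratio-and-CI calculation can be sketched for a simplified paired design (hypothetical data; the study above used the full ANOVA with sequence, subject, period, and treatment effects, whereas here the mean square error reduces to the variance of within-subject ln-differences):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical ln(AUC) for 16 subjects under reference (A) and test (B) treatments
ln_a = rng.normal(np.log(100.0), 0.25, 16)
ln_b = ln_a + rng.normal(np.log(1.05), 0.10, 16)  # ~5% higher exposure under B

d = ln_b - ln_a
n = len(d)
se = d.std(ddof=1) / np.sqrt(n)

# Ratio point estimate = antilog of the mean ln-difference;
# 90% CI uses the two-sided t quantile with n-1 degrees of freedom
t90 = stats.t.ppf(0.95, n - 1)
ratio = np.exp(d.mean())
ci = (np.exp(d.mean() - t90 * se), np.exp(d.mean() + t90 * se))
```

Working on the ln scale and back-transforming is what makes the point estimate a ratio rather than a difference of means.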

Confidence intervals (CIs) are usually the preferred means for evaluating chance in epidemiological studies because they are more informative than P values. A CI is the interval (usually 95%) around the risk estimate (which is a point estimate) that represents the upper and lower plausible values of the true risk estimate. CIs provide information on both the precision of the point estimate and statistical significance. Wide confidence intervals indicate that there is a high degree of uncertainty about the accuracy of the point estimate and are usually a result of small study populations. A CI that excludes 1 means that there is a 95% probability that the null hypothesis is not operating. [Pg.615]
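A short sketch with hypothetical cohort counts shows how a risk-ratio point estimate and its 95% CI are computed on the log scale; an interval whose lower limit exceeds 1 corresponds to statistical significance at the 0.05 level:

```python
import numpy as np

# Hypothetical cohort counts: exposed (a cases out of n1), unexposed (b out of n2)
a, n1 = 30, 200
b, n2 = 15, 200

rr = (a / n1) / (b / n2)  # point estimate of the risk ratio

# Wald 95% CI computed on the log scale, then back-transformed
se = np.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
lo, hi = np.exp(np.log(rr) - 1.96 * se), np.exp(np.log(rr) + 1.96 * se)
```

Here the interval (roughly 1.1 to 3.6) excludes 1, but its width also shows how imprecise the point estimate of 2.0 is with groups of this size.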

As described by the UK Pesticide Safety Directorate, the NESTI method basically is a point estimate [8]. The NESTI methodology calculates the dietary exposure using high-end consumption and the highest residue value, adjusted by a variability factor, to account for potential variability between individual commodity units. Nonetheless, the calculated NESTI statistic is a point estimate. [Pg.361]

Tabulated data for experimental adsorption isotherms are fitted with analytical equations for the calculation of thermodynamic properties by integration or differentiation. These thermodynamic properties, expressed as a function of temperature, pressure, and composition, are input to process simulators of adsorption columns. In addition, analytical equations for isotherms are useful for interpolation and cautious extrapolation. Obviously, it is desirable that the isotherm equations agree with experiment within the estimated experimental error. The same points apply to theoretical isotherms obtained by molecular simulation, with the requirement that the analytical equations should fit the isotherms within the estimated statistical error of the molecular simulation. [Pg.44]
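As a sketch of such a fit, a Langmuir isotherm (one common analytical form; the tabulated pressure-loading data below are hypothetical) can be fitted to data by least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_max, b):
    # Langmuir isotherm: q = q_max * b * p / (1 + b * p)
    return q_max * b * p / (1.0 + b * p)

# Hypothetical tabulated isotherm data (pressure in bar, loading in mol/kg)
p = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0])
q = langmuir(p, 3.0, 1.2) + np.array([0.02, -0.03, 0.01, 0.04, -0.02, 0.03, -0.01])

popt, pcov = curve_fit(langmuir, p, q, p0=[1.0, 1.0])
q_max_hat, b_hat = popt
```

The fitted analytical form can then be integrated or differentiated for thermodynamic properties, and the residuals checked against the estimated experimental error.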

Let θ denote the vector of parameters for the current model. A point estimate, θ̂, with locally maximum posterior probability density in the parameter space, is obtained by minimizing a statistical objective function... [Pg.217]
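A minimal sketch under simple assumptions (a Gaussian likelihood with known unit variance and a diffuse Gaussian prior, both hypothetical): the point estimate θ̂ is found by minimizing the negative log posterior as the statistical objective function:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
y = rng.normal(2.0, 1.0, 50)  # simulated data with unit variance, unknown mean

def neg_log_posterior(theta):
    mu = theta[0]
    # Gaussian likelihood (sigma = 1) plus a diffuse N(0, 10^2) prior on mu
    return -(norm.logpdf(y, mu, 1.0).sum() + norm.logpdf(mu, 0.0, 10.0))

# The MAP point estimate locally maximizes the posterior density,
# i.e., minimizes the negative log posterior
res = minimize(neg_log_posterior, x0=[0.0])
mu_map = res.x[0]
```

With a diffuse prior the MAP estimate sits very close to the sample mean, as expected.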

Statistical methods frequently employed in effluent toxicity evaluations include point estimation techniques such as probit analysis, and hypothesis testing such as Dunnett's analysis of variance (ANOVA). Point estimation techniques enable the investigator to derive a quantitative dose-response relationship. This method has been generally applied to statistical analyses of acute effluent monitoring data. [Pg.963]
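A probit analysis can be sketched as a maximum-likelihood fit of a normal-cdf dose-response curve, from which the LC50 point estimate follows; the effluent dilution data below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical dilution series: n organisms tested, k dead at each concentration
conc = np.array([6.25, 12.5, 25.0, 50.0, 100.0])
n = np.array([20, 20, 20, 20, 20])
k = np.array([1, 4, 10, 16, 19])

def nll(params):
    a, b = params
    # Probit model: P(death) = Phi(a + b * log10(conc))
    p = norm.cdf(a + b * np.log10(conc))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    # Negative binomial log-likelihood (constants dropped)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

res = minimize(nll, x0=[-3.0, 2.0], method="Nelder-Mead")
a_hat, b_hat = res.x
lc50 = 10 ** (-a_hat / b_hat)  # concentration at which predicted mortality is 50%
```

The fitted slope and intercept give the whole dose-response curve, so LC values other than the LC50 fall out of the same fit.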

In addition, it is very useful to estimate the difference between the population parameters using the difference between the sample statistics. So, the actual difference seen in the data gives a point estimate of the difference. In the data shown in Table 7.12, in which P = 0.017, the difference in percentage success between the two samples is 16.0% (82.7% - 66.7%), which gives a point estimate of the true difference. [Pg.383]
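A sketch of this calculation, with hypothetical counts chosen only to match the quoted percentages (the actual group sizes in Table 7.12 may differ), including a Wald 95% CI around the point estimate:

```python
import numpy as np

# Hypothetical counts consistent with 82.7% and 66.7% success
x1, n1 = 62, 75
x2, n2 = 50, 75

p1, p2 = x1 / n1, x2 / n2
diff = p1 - p2  # point estimate of the true difference, 0.160 (16.0%)

# Wald 95% CI for the difference in proportions
se = np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
lo, hi = diff - 1.96 * se, diff + 1.96 * se
```

The interval excluding zero is consistent with the quoted two-sided P of 0.017.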

The term Zβ in the numerator is the normal deviate for the power it is wished to have in the study. If a power of 95% is desired, Zβ is 1.645. This is a one-tailed 95% normal deviate because a Type II error occurs only if the difference in the data is too small to be statistically significant. A difference which, by chance, is larger than the true difference will not give a Type II error, because it will be statistically significant (although the point estimate of the true difference will be an overestimate). If a power of 90% is wished, Zβ is 1.28, and for a power of 80%, Zβ is 0.84. [Pg.385]
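These deviates come directly from the inverse normal cdf, as a quick check shows:

```python
from scipy.stats import norm

# One-tailed normal deviates for the power term of a sample-size calculation
z_95 = norm.ppf(0.95)  # power of 95% -> 1.645
z_90 = norm.ppf(0.90)  # power of 90% -> 1.28
z_80 = norm.ppf(0.80)  # power of 80% -> 0.84
```

Being one-tailed quantiles, these are smaller than the familiar two-tailed significance deviates for the same percentage.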

To summarize, the computational aspects of confidence intervals involve a point estimate of the population parameter, some error attributed to sampling, and the amount of confidence (or reliability) required for interpretation. We have illustrated the general framework of the computation of confidence intervals using the case of the population mean. It is important to emphasize that interval estimates for other parameters of interest will require different reliability factors because these depend on the sampling distribution of the estimator itself and different calculations of standard errors. The calculated confidence interval has a statistical interpretation based on a probability statement. [Pg.74]
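For the population-mean case described above, a short sketch (with hypothetical data) makes the three ingredients explicit: the point estimate, the sampling error, and the reliability factor:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(50.0, 8.0, 25)  # a hypothetical sample of n = 25 measurements

xbar = x.mean()                       # point estimate of the population mean
se = x.std(ddof=1) / np.sqrt(len(x))  # standard error attributed to sampling
t = stats.t.ppf(0.975, len(x) - 1)    # reliability factor for 95% confidence (t distribution)
ci = (xbar - t * se, xbar + t * se)
```

For other parameters only the reliability factor and the standard-error formula change, which is exactly the point made in the text.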

Using PK differences between males and females as an example, suppose the sex difference was considered not statistically significant from Phase 1 and 2 data. However, it may be that the evaluated effect was not powered appropriately to rule out a clinically significant difference from the available data. For example, if the resulting 90% CI for the covariate parameter estimate was not well defined (e.g., outside [0.8, 1.25]), then a no-effect conclusion may not be the most appropriate assumption at this juncture. Therefore, if differences between males and females are of interest to the trial or program outcome, retention of this covariate parameter in the model would be advised. A sensitivity analysis (see Section 35.3.1) assessing the influence of a PK sex effect may be conducted. This would determine whether the point estimate of the effect influences the simulation outcome and how its precision may contribute to overall uncertainty in the results. [Pg.885]

