Big Chemical Encyclopedia


Evaluating Uncertainty

As we have seen in previous sections, the result of a measurement is not complete unless an estimate of the uncertainty associated with the result is available. In any measurement procedure, there will be a number of aspects of the procedure that will contribute to the uncertainty. Uncertainty arises due to the presence of both random and systematic errors. To obtain an estimate of the uncertainty in a result, we need to identify the possible sources of uncertainty, obtain an estimate of their magnitude and combine them to obtain a single value which encompasses the effect of all the significant sources of error. This section introduces a systematic approach to evaluating uncertainty. [Pg.162]

The uncertainty evaluation process can be broken down into four stages: specification, identification, quantification and combination. [Pg.162]

To allow the uncertainty to be evaluated effectively, a model equation describing the method of analysis is required. The starting point is the equation used to calculate the final result. Initially, we will need to consider the uncertainties associated with the parameters that appear in this equation. It may be necessary to add terms to this equation (i.e. expand the model) to include other parameters that may influence the final result and therefore contribute to the measurement uncertainty. [Pg.162]
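Once a model equation y = f(x1, ..., xN) has been written down, the quantification and combination stages typically follow the GUM law of propagation of uncertainty. For uncorrelated input quantities, the combined standard uncertainty takes the general form below (a generic expression, not a method-specific model):

```latex
% Combined standard uncertainty for a measurement model y = f(x_1, ..., x_N),
% assuming uncorrelated input quantities (GUM law of propagation of uncertainty).
\[
  u_c(y) = \sqrt{\sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{2} u^{2}(x_i)}
\]
```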

It is also essential to have a clear understanding of the analyte or property being measured. For example, an analyst may be studying the amount of lead present in paint used on toys. One possibility would be to use a method which determines the total amount of lead present. Alternatively, the analyst may be interested in the amount of lead that is released from a paint sample taken from a toy when the sample is extracted with a stomach-acid simulant. In both cases, the end measurement is the same - the concentration of lead in a solution. However, the results from the two approaches would be very different. In the first case, the sample will have been digested with a strong acid solution which should release all of the lead present in the sample. In the second case, we would expect the results to be lower, as the method is designed to estimate the amount of lead released under particular conditions. The second type of method is sometimes referred to as an empirical method. This is a method where the result produced is entirely dependent on the analytical method. In the above example, if the ... [Pg.162]

When evaluating uncertainty, it is important to understand the distinction between empirical and non-empirical methods, as this influences how the uncertainty is evaluated. In the case of non-empirical methods, any bias in the results which is due to the method of analysis or, for example, a particular sample type, needs to be considered as part of the uncertainty evaluation process. For example, if a method was intended to determine the total amount of cadmium present in a soil sample, but for some reason only 90% of the cadmium present was extracted from the sample, then this 10% bias would need to be accounted for in the uncertainty estimate. One approach would be to correct results to take account of the bias. However, there would be an uncertainty associated with the correction as there will be some uncertainty about the estimate of the bias. For empirical methods, the method bias is, by definition, equal to zero (the method defines the result obtained). However, when evaluating the uncertainty associated with results obtained from an empirical method, we still need to consider the uncertainty associated with any bias introduced by the laboratory during its application of the method. One approach is to analyse a reference material that has been characterized by using the same empirical method. If no suitable reference material is available, then any bias associated with carrying out the individual stages of the method in a particular laboratory will need to be evaluated. [Pg.163]
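A minimal sketch of the recovery-correction idea described above, using hypothetical numbers (the 90% recovery, the observed result and all uncertainty values are illustrative, not taken from the source): the raw result is divided by the estimated recovery, and the relative standard uncertainty of the recovery is combined in quadrature with that of the raw result.

```python
import math

def bias_corrected_result(x_obs: float, u_x: float, recovery: float, u_rec: float):
    """Correct a result for incomplete recovery and propagate both uncertainties.

    x_obs, u_x      : observed result and its standard uncertainty
    recovery, u_rec : estimated recovery (e.g. 0.90) and its standard uncertainty

    Returns the corrected result and its combined standard uncertainty, assuming
    the two contributions are independent (relative uncertainties of a quotient
    add in quadrature).
    """
    x_corr = x_obs / recovery
    rel_u = math.sqrt((u_x / x_obs) ** 2 + (u_rec / recovery) ** 2)
    return x_corr, x_corr * rel_u

# Hypothetical values: 45 mg/kg observed with u = 1.5 mg/kg; recovery 0.90 +/- 0.02
print(bias_corrected_result(45.0, 1.5, 0.90, 0.02))
```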


Frey, H.C. and E.S. Rubin, Evaluate Uncertainties in Advanced Process Technologies, Chemical Engineering Progress, May 1992, 63-70. (Uncertainty evaluation)... [Pg.2545]

Measurement Error Uncertainty in the interpretation of unit performance results from statistical errors in the measurements, low levels of process understanding, and differences in unit and modeled performance (Frey, H.C., and E. Rubin, Evaluate Uncertainties in Advanced Process Technologies, Chemical Engineering Progress, May 1992, 63-70). It is difficult to determine which measurements will provide the most insight into unit performance. A necessary first step is an understanding of the measurement errors likely to be encountered. [Pg.2563]

Qualitative analysis methods should have well-grounded and generally accepted quantitative estimates of their reliability. The problem was first formulated by N.P. Komar in 1955, and it became more pressing when test methods and identification software systems (ISS) entered the market. The metrological treatment of qualitative analysis can develop only within the framework of uncertainty theory. To estimate the reliability of a result when detecting a substance X, it is necessary to evaluate both constituents of the uncertainty: the probability of a false identification and the probability of failing to reveal X when it is actually present. There are two mutually complementary approaches to evaluating uncertainty in qualitative analysis, just as in quantitative analysis ... [Pg.24]
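As a minimal illustration of these two constituents, the sketch below estimates them from hypothetical validation counts (the function name and all numbers are assumed for illustration only): the rate of false identifications when X is absent, and the rate of failing to reveal X when it is present.

```python
def qualitative_error_rates(true_pos: int, false_neg: int, true_neg: int, false_pos: int):
    """Estimate the two constituents of uncertainty in qualitative analysis from
    a hypothetical validation study: the false-positive rate (misidentification
    of X when it is absent) and the false-negative rate (failure to reveal X
    when it is present)."""
    p_false_positive = false_pos / (false_pos + true_neg)
    p_false_negative = false_neg / (false_neg + true_pos)
    return p_false_positive, p_false_negative

# Hypothetical counts from 100 samples containing X and 100 blank samples
print(qualitative_error_rates(true_pos=96, false_neg=4, true_neg=98, false_pos=2))
```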

The simplest and most appropriate way to evaluate uncertainty is by using sensitivity analysis on the risk assessments. Sensitivity to a parameter is defined as the change in risk measure per unit change in that parameter (Ref. 76). [Pg.38]
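A minimal sketch of this definition, assuming a hypothetical risk model and parameter set (none of the names or values come from the source): the sensitivity to each parameter is approximated as the change in the risk measure per unit change in that parameter, varied one at a time.

```python
def sensitivities(risk_fn, params, step=1e-3):
    """Approximate the sensitivity of a risk measure to each parameter as the
    change in risk per unit change in that parameter (one-at-a-time central
    finite differences)."""
    sens = {}
    for name, value in params.items():
        h = step * (abs(value) if value != 0 else 1.0)
        up = dict(params, **{name: value + h})
        down = dict(params, **{name: value - h})
        sens[name] = (risk_fn(up) - risk_fn(down)) / (2 * h)
    return sens

# Hypothetical risk model: event frequency x probability of failure on demand x consequence
risk = lambda p: p["frequency"] * p["pfd"] * p["consequence"]
print(sensitivities(risk, {"frequency": 0.1, "pfd": 0.01, "consequence": 100.0}))
```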

This chapter deals with handling the data generated by analytical methods. The first section describes the key statistical parameters used to summarize and describe data sets. These parameters are important, as they are essential for many of the quality assurance activities described in this book. It is impossible to carry out effective method validation, evaluate measurement uncertainty, construct and interpret control charts or evaluate the data from proficiency testing schemes without some knowledge of basic statistics. This chapter also describes the use of control charts in monitoring the performance of measurements over a period of time. Finally, the concept of measurement uncertainty is introduced. The importance of evaluating uncertainty is explained and a systematic approach to evaluating uncertainty is described. [Pg.139]

The application of fuzzy logic to the risk assessment of the use of solvents, in order to evaluate the uncertainties affecting both individual and societal risk estimates, is an area with relevance to the present considerations (Bonvicini et al., 1998). In evaluating uncertainty by fuzzy logic, fuzzy numbers describe the uncertain input parameters and the calculations are performed using fuzzy arithmetic; the outputs will also be fuzzy numbers. The results of this work are an attempt to address some of the questions that the use of fuzzy logic in the field of risk analysis stimulates. [Pg.45]
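For illustration only (not from the cited work), the sketch below applies fuzzy arithmetic to triangular fuzzy numbers, a common minimal representation of uncertain inputs: addition is exact for triangular numbers, and scaling by a non-negative crisp constant preserves the triangular form, so the fuzzy inputs propagate to a fuzzy output.

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    """Triangular fuzzy number (low, mode, high) with low <= mode <= high."""
    low: float
    mode: float
    high: float

    def __add__(self, other):
        # Addition of triangular fuzzy numbers is exact and component-wise
        return TriangularFuzzyNumber(self.low + other.low,
                                     self.mode + other.mode,
                                     self.high + other.high)

    def scale(self, k: float):
        # Multiplication by a non-negative crisp constant
        return TriangularFuzzyNumber(k * self.low, k * self.mode, k * self.high)

# Hypothetical fuzzy inputs: two independent individual-risk contributions (per year)
r1 = TriangularFuzzyNumber(1e-6, 2e-6, 5e-6)
r2 = TriangularFuzzyNumber(2e-6, 3e-6, 8e-6)
print(r1 + r2)  # fuzzy total individual risk
```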

The evaluated uncertainty of this substitution, i.e. the uncertainty of K, will limit the uncertainty of the measurement result of the unknown sample and must be taken into account as a contribution to the total uncertainty budget of the result obtained on the unknown sample since ... [Pg.67]

At the conclusion of the study, surplus test materials are offered for sale as reference materials for quality control and method development purposes. These materials have been studied for homogeneity and stability and have been assigned values for specific analytes. The more recently produced materials (since July 1999) have traceable assigned values with rigorously evaluated uncertainties. Older materials have consensus-based assigned values. [Pg.119]

The steps considered when evaluating measurement uncertainty components in clinical analyses are illustrated in Fig. 5. [Pg.188]

The case study comprises a conceptual model, the modelling approach, the construction of input distributions and the variance propagation methods. When evaluating uncertainty, it is important to consider how each of these elements contributes to the overall uncertainty. [Pg.119]

Kadis, R.: Evaluating uncertainty in analytical measurements: the pursuit of correctness. Accred. Qual. Assur. 3, 237-241 (1998)...

Evaluate the uncertainty and sensitivity of the relative rankings. Uncertainty needs to be accounted for and tracked throughout the risk assessment process. At times this may simply be an accounting exercise, listing the factors that introduced uncertainty into the assessment. At other times the uncertainty can be represented by probability distributions and a Monte Carlo process employed to provide a range of values. [Pg.395]

The step-by-step approach recommended in the ISO Guide and the top-down approach have been seen as alternative and substantially different ways of evaluating uncertainty, but the comparison between method development protocols and the ISO approach above shows that they are more similar than appears at first sight. In particular, both require careful consideration and study of the main effects on the result in order to obtain robust results that account properly for each contribution to the overall uncertainty. However, the top-down approach relies on that study being carried out during method development; to make use of the data in ISO GUM estimations, the detailed data from the study must be available. [Pg.39]

The case presented here is simple because all six samples can be analysed as one set of samples in one analytical run. For a single run, the estimation of uncertainty can be reduced to the calibration of the method with an in-house reference material and the uncertainty of the analysis itself. In this example, the inhomogeneity of the sample, the evaluation uncertainty and the uncertainty of the reference material (i.e. its inhomogeneity), as given in Table 1, can be neglected because the results of only one analysis are compared. The weighing uncertainties can be neglected because they are small compared with the sampler uncertainty. [Pg.77]

This is true of the component-by-component ("bottom-up") method for evaluating uncertainty, which is directly in line with GUM. It is also true of the top-down approach [20], which provides a valuable alternative when poorly understood steps are involved in the CMP and a full mathematical model is lacking. An important point is that the top-down methodology implies reconciling the information available with the information required, based on a detailed analysis of the factors which affect the result. For both approaches to work advantageously, a clear specification of the analytical procedure is evidently a necessary condition. [Pg.150]
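A minimal sketch of the two routes under assumed data (all values are illustrative): the bottom-up estimate combines individually quantified standard uncertainty components in quadrature, while the top-down estimate takes the observed spread of results from long-term method performance, e.g. control-chart data on a stable sample, as the standard uncertainty.

```python
import math
import statistics

def bottom_up(components):
    """Combine individually evaluated standard uncertainty components in
    quadrature (root-sum-of-squares), as in a GUM-style uncertainty budget."""
    return math.sqrt(sum(u ** 2 for u in components))

def top_down(control_chart_results):
    """Use the observed spread of repeated results on a stable control sample
    as the standard uncertainty (a precision-based, top-down estimate)."""
    return statistics.stdev(control_chart_results)

# Hypothetical data: three budget components vs. ten control-chart results
print(bottom_up([0.10, 0.05, 0.08]))
print(top_down([10.2, 10.4, 9.9, 10.1, 10.3, 10.0, 10.2, 9.8, 10.1, 10.3]))
```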

Positive programmatic and pedagogical value of "evaluative uncertainty"... [Pg.154]

The poor statistics seem to call for a different philosophy of interpretation of the evaluated uncertainties. There is increasing attention to Bayesian statistics [19] and ongoing discussions take place on the relative merits of the various approaches and the philosophy behind them. Modern statistical practice is dominated by two... [Pg.197]

Despite the success of the canonical model in fitting the solar s-nuclide distribution, some of its basic assumptions deserve questioning. This concerns in particular the presumed exponential form for the distribution of the neutron exposures τ, which was introduced by [33] in view of its mathematical ease in abundance calculations. In addition, the canonical model makes it difficult to evaluate uncertainties of a nuclear or observational nature in the s-nuclide abundance predictions. As a result, the concomitant uncertainties in the solar r-abundances are traditionally not evaluated. The shortcomings of the canonical model are cured to a large extent by the so-called multi-event s-process model (MES) [37]. In view of the importance of evaluating the uncertainties affecting the solar distribution of the abundances of the r-nuclides, we review the MES in some detail. A similar multi-event model has also been developed for the r-process (MER), and is presented in [38]. [Pg.298]

The lack of methods to evaluate uncertainties in modeling results. [Pg.170]

Most reliability engineers use a Chi-Squared probability distribution for the uncertainty. The Chi-Squared distribution is a special case of the Gamma distribution and is appropriate for evaluating uncertainty (Ref. 7, Appendix 5, and Ref. 8, Chapter 5). Given a point estimate of the failure... [Pg.38]
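A minimal sketch of this use of the Chi-Squared distribution, under the usual constant-failure-rate, time-terminated-test assumptions (the counts, hours and confidence level are illustrative): confidence bounds on the failure rate are obtained from Chi-Squared percentiles given the observed number of failures and the cumulative operating time.

```python
from scipy.stats import chi2

def failure_rate_bounds(n_failures: int, total_time_hours: float, confidence: float = 0.90):
    """Two-sided confidence bounds on a constant failure rate (failures/hour)
    from Chi-Squared percentiles (time-terminated test; assumes at least one
    observed failure for the lower bound)."""
    alpha = 1.0 - confidence
    lower = chi2.ppf(alpha / 2, 2 * n_failures) / (2 * total_time_hours)
    upper = chi2.ppf(1 - alpha / 2, 2 * n_failures + 2) / (2 * total_time_hours)
    return lower, upper

# Hypothetical field data: 3 failures observed over 100,000 device-hours
print(failure_rate_bounds(3, 1.0e5))
```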

The calibration hierarchy is the sequence of calibrations from a reference to the final measuring system, where the outcome of each calibration depends on the outcome of the previous calibration [13]. This hierarchy requires that, for measurements incorporating more than one input quantity in the measurement model (e.g., pH, T), each input quantity must itself be metrologically traceable. In addition, each measurement and derived quantity is listed with an evaluated uncertainty that captures the uncertainties of the measurements and of the calibration hierarchy. Also, because the propagation of variances is additive, measurement uncertainty increases throughout the calibration hierarchy from the RM (which is ideally a certified reference material, also known as a CRM) to the sample. A statement describing the uncertainty is essential, as a measured quantity value unaccompanied by a measurement uncertainty is not only useless, but potentially dangerous, because the measured value may be misinterpreted or misused. [Pg.57]
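As a sketch of how variances accumulate along such a hierarchy, assuming independent contributions (the three groupings shown are illustrative, not a prescribed budget):

```latex
% Variance contributions add along the calibration hierarchy, from the
% (certified) reference material through calibration to the measurement on
% the sample, assuming the contributions are independent.
\[
  u^{2}_{\text{sample}} = u^{2}_{\text{CRM}} + u^{2}_{\text{calibration}} + u^{2}_{\text{measurement}}
\]
```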

After the set of alternatives is defined, it is necessary to evaluate the uncertainties regarding each alternative (action) and its possible states of nature. For this stage... [Pg.1013]

Fig. 7.72 Plots corresponding to cyclic fatigue data in Fig. 7.71 for sandblasted alumina and Y-TZP of thickness 1.5 mm, but in terms of critical loads instead of stress and for dentin-like substrate with intervening dental cement of thickness 100 μm, using Eqs. (7.17), (7.18), and (7.19) to convert the data. Ninety-five percent confidence bounds are used to evaluate uncertainties in sustainable loads at long lifetimes, tR = 10 years. Shaded band indicates nominal oral function range [35]. With kind permission of Elsevier...
Supplement 1 to the GUM defines the Monte Carlo method as a method for the propagation of distributions by performing random sampling from probability distributions [8]. This method is particularly suitable for models that cannot be linearized or solved with classical methods. How the Monte Carlo method can be used to evaluate uncertainty is explained briefly in [9]. As discussed above, the first step involves defining the dependence of the output quantity as a function of all possible input quantities. This results in the measurement equation, Eq. (22.8). After the measurement equation has been formulated, m random samples are generated for each input quantity with the help of a random number generator and the corresponding probability distribution functions (PDFs) p_i ...
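A minimal sketch of this procedure in the spirit of GUM Supplement 1, with an illustrative measurement equation (the model, the assigned PDFs and the sample size are assumptions, not taken from the source): each input quantity is sampled from its PDF, the measurement equation is evaluated for every draw, and the output sample is summarized by its mean, standard uncertainty and a 95 % coverage interval.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
m = 200_000  # number of Monte Carlo trials

# Illustrative measurement equation: c = (A_sample / A_standard) * c_standard
# Assign a PDF to each input quantity (all values are hypothetical).
A_sample   = rng.normal(loc=0.482, scale=0.004, size=m)   # absorbance of the sample
A_standard = rng.normal(loc=0.510, scale=0.004, size=m)   # absorbance of the standard
c_standard = rng.uniform(low=9.95, high=10.05, size=m)    # standard concentration, mg/L

# Propagate the distributions through the measurement equation
c = (A_sample / A_standard) * c_standard

mean = c.mean()
u = c.std(ddof=1)                          # standard uncertainty of the output
low, high = np.percentile(c, [2.5, 97.5])  # 95 % coverage interval
print(f"c = {mean:.3f} mg/L, u(c) = {u:.3f} mg/L, 95 % interval [{low:.3f}, {high:.3f}]")
```

Increasing m reduces the Monte Carlo sampling noise; GUM Supplement 1 also describes an adaptive procedure for choosing the number of trials, which is omitted from this sketch.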


