
Define Uncertainty Tolerance

The analyst must always be concerned with the different viewpoints of expert assessors in the regulatory agencies or other bodies to whom the analytical data will eventually be submitted. [Pg.465]


Tolerance verification defines inspection planning and metrological procedures for functional requirements, functional specifications, and manufacturing specifications. It is very important to consider tolerance verification early in the design activities so that uncertainties can be assessed. Tolerance verification makes it possible to close the process loop, to check product conformity, and to verify the assumptions made by the designer. [Pg.1232]

As a result of the limitations of construction and survey equipment, the realised fill level will always show some deviation from the specified height. Achieving accuracies better than 0.05 m with standard land reclamation equipment is very difficult. Deviations should be accepted up to a certain level, also because of the uncertainties in settlement predictions. Claiming millimetre precision for earthworks is unrealistic. Technical specifications should therefore allow for pre-defined construction tolerances. If necessary, a distinction in scale can be made by defining macro- and micro-tolerances. [Pg.420]
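A minimal sketch of how such a two-level tolerance check might look, assuming a macro tolerance on the mean fill level of an area and a micro tolerance on individual survey points; the specified height, survey data, and tolerance values are illustrative, not from the source:

```python
import statistics

# Hypothetical check of surveyed fill levels against a specified height,
# with an assumed macro tolerance (on the area mean) and micro tolerance
# (on individual survey points). All values are illustrative only.
specified = 4.00                                  # m, design fill level
surveyed = [4.03, 3.97, 4.08, 3.95, 4.02, 4.06]   # m, survey points

macro_tol = 0.05   # m, allowed deviation of the mean level
micro_tol = 0.10   # m, allowed deviation of any single point

macro_ok = abs(statistics.fmean(surveyed) - specified) <= macro_tol
micro_ok = all(abs(z - specified) <= micro_tol for z in surveyed)
print(f"macro tolerance met: {macro_ok}, micro tolerance met: {micro_ok}")
```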

When setting the goals of a measurement project, it has to be asked: what exactly has to be determined? What are the final quantities required, and what inaccuracy can be tolerated in these quantities? Only when these factors are known can an analysis be made in which the quantities to be measured and the measurement accuracy of each quantity are defined. This analysis is based on the measurement method selected and on the computation of measurement uncertainties. Usually the analysis of measurement uncertainties is made after monitoring; however, making it beforehand is part of good planning practice. This approach ensures that the correct information is obtained with the desired accuracy. [Pg.1120]
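As an illustration of such an up-front uncertainty analysis, the sketch below propagates assumed relative standard uncertainties of the measured quantities through a product-form model (where relative uncertainties combine in quadrature) and checks the result against a tolerated inaccuracy. The quantities and values are hypothetical:

```python
import math

# Hypothetical example: decide whether a planned heat-flow measurement
# Q = m_dot * cp * dT meets a tolerated relative inaccuracy of 5 %.
# Assumed relative standard uncertainties of each measured quantity:
u_rel = {
    "mass_flow": 0.02,   # 2 % from flow-meter calibration
    "cp":        0.01,   # 1 % from property tables
    "delta_T":   0.03,   # 3 % from the temperature-sensor pair
}

# For a product-form model, relative uncertainties combine in quadrature.
u_total = math.sqrt(sum(u**2 for u in u_rel.values()))

tolerated = 0.05
print(f"combined relative uncertainty: {u_total:.3f}")
print("tolerated inaccuracy met" if u_total <= tolerated else "refine the method")
```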

When appropriate chemical-specific data are available, a CSAF can be used to replace the relevant default sub-factor; for example, suitable data defining the difference in target organ exposure between animals and humans could be used to derive a CSAF to replace the uncertainty sub-factor for animal-to-human differences in toxicokinetics (a factor of 4.0). The overall UF would then be the value obtained on multiplying the CSAF(s), used to replace default sub-factor(s), by the remaining default sub-factor(s) for which suitable data were not available. In this way, chemical-specific data in one area could be introduced quantitatively into the derivation of a tolerable intake, and data would replace uncertainty. [Pg.225]
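A hedged worked example of this replacement arithmetic, assuming the common default split of the 10-fold interspecies factor into 4.0 (toxicokinetics) x 2.5 (toxicodynamics) alongside the 10-fold intraspecies default; the CSAF of 2.0 and the NOAEL are hypothetical values, not from the source:

```python
# Default sub-factors (assumed conventional split of the 10 x 10 default):
default_tk_animal_human = 4.0   # interspecies toxicokinetics
default_td_animal_human = 2.5   # interspecies toxicodynamics
default_intraspecies = 10.0     # human variability

csaf_tk = 2.0   # assumed chemical-specific ratio of target-organ exposure

# Overall UF: CSAF replaces one default sub-factor, the rest are retained.
uf_default = default_tk_animal_human * default_td_animal_human * default_intraspecies
uf_with_csaf = csaf_tk * default_td_animal_human * default_intraspecies

noael = 50.0  # mg/kg bw per day, hypothetical
print(f"default UF = {uf_default:.0f}, tolerable intake = {noael/uf_default:.2f}")
print(f"with CSAF  = {uf_with_csaf:.0f}, tolerable intake = {noael/uf_with_csaf:.2f}")
```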

The resilience index (RI) is defined as the largest total uncertainty which the heat exchanger network (HEN) can tolerate while remaining feasible... [Pg.23]

Often, estimates of exposure are compared directly with benchmark doses or concentrations (i.e. doses that result in a defined increase in the incidence of a critical effect, such as 5% or 10%). Alternatively, they are compared with either a lowest-observed-adverse-effect level (LOAEL), the lowest concentration that leads to an adverse effect, or a no-observed-adverse-effect level (NOAEL), the highest concentration that does not lead to an adverse effect, or their equivalents. This results in a 'margin of safety' or 'margin of exposure'. Alternatively, estimates of exposure are compared with tolerable or reference concentrations or doses, which are based on the division of benchmark doses and/or concentrations, or the NOAELs or LOAELs, by factors that account for uncertainties in the available data. [Pg.10]
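The two comparisons described above amount to simple arithmetic; the sketch below illustrates both with hypothetical numbers (the NOAEL, exposure estimate, and uncertainty factor are assumptions, not values from the source):

```python
# Illustrative numbers only.
noael = 10.0                # mg/kg bw per day, highest dose with no adverse effect
exposure = 0.02             # mg/kg bw per day, estimated intake
uncertainty_factor = 100.0  # e.g. 10 interspecies x 10 intraspecies

margin_of_exposure = noael / exposure        # first approach in the text
reference_dose = noael / uncertainty_factor  # second approach in the text

print(f"MOE = {margin_of_exposure:.0f}")
print(f"RfD = {reference_dose:.3f} mg/kg bw per day")
print("exposure below RfD" if exposure <= reference_dose else "exposure exceeds RfD")
```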

Although the advice is given mainly for the description of risk assessment results, it holds completely for exposure assessment, since the quantitative input for risk assessment is exposure assessment and uncertainty analysis. Since any description of the resulting exposure distribution(s) in terms such as 'very low', 'low', 'fair', 'moderate', 'high' or 'extreme' includes an evaluation, it must be defined and justified (EnHealth Council, 2004). Those communicating the results of an exposure assessment frequently use comparative reporting schemes, such as '50%/the majority/.../95% of data/measurements/individuals show exposure values/estimates/measurements lower than the tolerable daily intake...'. [Pg.75]

The sensitivity analysis (SA) may also provide a rationale for simplifying a model. This may occur if the outcome is shown to be robustly tolerant to wide ranges of parameter uncertainty, which may allow for the removal of some parameters and thus lead to a more parsimonious simulation model. Conversely, the SA may reveal that insufficient information currently exists to define a precise or reliable range of trial outcomes. In the latter case, either more time may be required to obtain additional informative experimental data and thus reduce the uncertainty to an acceptable range, or separate sets of plausible assumptions may need to be considered and subsequently tested for their own sensitivity. Such decisions need buy-in from the subject matter experts and should be considered in the full context of the development program. [Pg.889]
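A minimal one-at-a-time sensitivity sketch of the kind described, using a toy two-parameter model; the model, nominal values, and uncertainty ranges are invented for illustration. A parameter whose full range barely moves the outcome is a candidate for fixing or removal:

```python
# One-at-a-time sensitivity analysis on a toy model (all values assumed).

def model(clearance, volume):
    """Toy outcome: a steady-state level for a fixed dosing rate."""
    dose_rate = 100.0
    return dose_rate / (clearance * volume)

nominal = {"clearance": 5.0, "volume": 50.0}
ranges = {"clearance": (2.5, 10.0), "volume": (45.0, 55.0)}

base = model(**nominal)
for name, (lo, hi) in ranges.items():
    outs = []
    for value in (lo, hi):
        args = dict(nominal)
        args[name] = value          # perturb one parameter, hold the rest
        outs.append(model(**args))
    spread = (max(outs) - min(outs)) / base
    print(f"{name}: relative output spread {spread:.0%}")
```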

Six sigma is one of the more recent popular approaches to QA, based on a tight statistical approach to the production of a product. The name arises from a desire to limit the tolerance of a product to plus or minus six standard deviations and thus have only 3.4 defects per million (the normal-distribution fraction lying beyond 4.5 standard deviations from the mean; the method allows for some measurement uncertainty). For the statistics to hold, the system must be in statistical control and the defects must be random and normally distributed. There is a heavy reliance on control charts, and the system is built around what to do if there is evidence of nonconformity. For a nonconforming product, six sigma institutes an approach with the acronym DMAIC: define, measure, analyze, improve, control. This has been implemented in some organizations, such as pharmaceutical companies, which produce large volumes of chemicals. However, strict statistical control of chemical products is not always easy, and the measurement process itself also needs to be taken into account. [Pg.3983]
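The quoted 3.4-per-million figure can be reproduced as the one-sided normal tail beyond 4.5 standard deviations (the distance left to the nearer specification limit once the allowance for drift and measurement uncertainty is taken into account):

```python
from scipy.stats import norm

# One-sided tail probability of a standard normal beyond 4.5 sigma.
defects_per_unit = norm.sf(4.5)
print(f"{defects_per_unit * 1e6:.1f} defects per million")  # ~3.4
```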

In the next step, standard uncertainties are defined for each source of uncertainty. GUM defines two different methods for estimating uncertainty. Type A is a method of evaluation by the statistical analysis of series of observations. Standard deviations can be calculated through repeated observations. Type B is a method of evaluation of uncertainty by means other than the statistical analysis of series of observations. Calibration results or tolerances given in manuals can be used here. They are usually expressed in the form of limits or confidence intervals. Typical rules for converting such information to an estimated standard uncertainty u are introduced in [5] (p. 164). [Pg.611]
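A minimal sketch of both evaluations, assuming a small set of repeated readings for Type A and a manufacturer's tolerance of plus or minus 0.05 for Type B; the rectangular-distribution rule u = a/sqrt(3) is one of the typical conversion rules referred to above:

```python
import math
import statistics

# Type A: statistical analysis of a series of observations (assumed data).
readings = [10.02, 9.98, 10.01, 10.00, 9.99, 10.03]
u_type_a = statistics.stdev(readings) / math.sqrt(len(readings))

# Type B: a manual states a tolerance of +/- 0.05 with no further detail;
# assuming a rectangular distribution, u = half-width / sqrt(3).
half_width = 0.05
u_type_b = half_width / math.sqrt(3)

print(f"u (Type A) = {u_type_a:.4f}")
print(f"u (Type B) = {u_type_b:.4f}")
```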

NOTE 2 The minimum hardware fault tolerance has been defined to alleviate potential shortcomings in SIF design that may result from the number of assumptions made in the design of the SIF, along with uncertainty in the failure rates of components or subsystems used in various process applications. [Pg.59]

Figure 11.13 Elliptical region around a point plotted in CIELAB space. The region defines the uncertainty or tolerance: an observer would perceive no difference in color among spectra that plot within the ellipsoidal region.
The nuclear hot factors are used to account for the power distribution in the core. They consist of three components: radial, local, and axial. The radial nuclear enthalpy rise hot factor is defined as the ratio of the hot assembly power to the average assembly power. The local nuclear enthalpy rise hot factor is defined as the ratio of the hot fuel rod power to the hot assembly average power. Finally, the axial nuclear enthalpy rise hot factor is defined as the ratio of the maximum axial plane power to the average plane power. In the full statistical treatment, the nuclear enthalpy rise hot factor is not an absolute value; it varies around the nominal value within a given tolerance. The uncertainty of the nuclear enthalpy rise hot factor is mainly induced by neutronic calculation errors. A typical error of 2% for each component of the hot factors is considered. If a normal distribution is assumed, the standard deviation of each hot factor is 1% of the nominal value. Table 7.22 [1] shows the uncertainties of the nuclear enthalpy rise hot factors considered here. [Pg.501]
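The statistical treatment described above can be sketched by Monte Carlo sampling, treating each hot-factor component as normally distributed about its nominal value with a standard deviation of 1%; the nominal values below are assumptions for illustration, not from the source:

```python
import random
import statistics

# Each hot-factor component varies around its nominal value with
# sigma = 1 % of nominal (the quoted 2 % calculation error under a
# normal-distribution assumption). Nominal values are assumed.
random.seed(0)
nominal = {"radial": 1.30, "local": 1.15, "axial": 1.40}
sigma_rel = 0.01

samples = []
for _ in range(100_000):
    product = 1.0
    for value in nominal.values():
        product *= random.gauss(value, sigma_rel * value)
    samples.append(product)

mean = statistics.fmean(samples)
sd = statistics.stdev(samples)
print(f"total hot factor: {mean:.3f} +/- {sd:.3f} (1 sigma)")
```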

