Big Chemical Encyclopedia


Errors with

The average error with this method is about 20%. [Pg.92]

More precisely, since the inverse-problem process computes a quadratic error at every point of a local area around a flaw, we limit the sensor surface so that the quadratic error induced by the integration still allows two close flaws to be separated, and remains negligible in comparison with other noises or errors. An unavoidable noise is the electronic noise due to the coil resistance, which we can estimate from the geometrical and physical properties of the sensor. The main conclusions are as follows ... [Pg.358]

Some density functional theory methods occasionally yield frequencies with a bit of erratic behavior, but with a smaller deviation from the experimental results than semiempirical methods give. Overall systematic error with the better DFT functionals is less than with HF. [Pg.94]

Accuracy Under normal conditions, relative errors of 1-5% are easily obtained with UV/Vis absorption. Accuracy is usually limited by the quality of the blank. Examples of the types of problems that may be encountered include the presence of particulates in a sample that scatter radiation and interferents that react with analytical reagents. In the latter case the interferent may react to form an absorbing species, giving rise to a positive determinate error. Interferents also may prevent the analyte from reacting, leading to a negative determinate error. With care, it may be possible to improve the accuracy of an analysis by as much as an order of magnitude. [Pg.409]
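The effect of an absorbing interferent described above can be sketched numerically. The snippet below is a minimal illustration, not the text's method: it inverts Beer's law (A = εbC) and shows how extra absorbance from an interferent produces a positive determinate error. The molar absorptivity and absorbance values are invented for the example.

```python
# Sketch: how an interferent that absorbs at the analytical wavelength
# propagates into a positive determinate error in a Beer's law analysis.
# All numerical values are hypothetical.

def concentration(absorbance, epsilon, path_cm=1.0):
    """Invert Beer's law: C = A / (epsilon * b)."""
    return absorbance / (epsilon * path_cm)

epsilon = 1.2e4          # molar absorptivity, L mol^-1 cm^-1 (assumed)
true_A = 0.450           # absorbance due to the analyte alone
interferent_A = 0.030    # extra absorbance from an interferent that also
                         # reacts with the analytical reagent

c_true = concentration(true_A, epsilon)
c_meas = concentration(true_A + interferent_A, epsilon)

rel_error = (c_meas - c_true) / c_true * 100
print(f"positive determinate error: {rel_error:+.1f}%")  # +6.7%
```

A negative determinate error works the same way with the sign reversed: an interferent that ties up analyte lowers the measured absorbance below its true value.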

If is available as a function of temperature, Eq. (2-174) can be solved directly for the flash point temperature. Otherwise, trial and error with a table of vs. T is required. Errors average about 5°C but may be as much as 15°C. [Pg.418]
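The trial-and-error procedure over a table can be sketched as a bracket-and-interpolate search. The property values, temperatures, and target below are invented placeholders (the text does not reproduce Eq. (2-174) or its data); only the search pattern is the point.

```python
# Sketch of trial and error with a tabulated property vs. T: scan adjacent
# table rows until the target value is bracketed, then interpolate linearly
# for the flash point temperature. Table and target are hypothetical.

table = [  # (temperature in K, tabulated property value) -- made-up numbers
    (280.0, 0.8), (290.0, 1.1), (300.0, 1.6), (310.0, 2.4),
]
target = 1.3  # property value at the flash point (assumed)

def flash_point(table, target):
    for (t1, v1), (t2, v2) in zip(table, table[1:]):
        if v1 <= target <= v2:  # bracketed: interpolate linearly
            return t1 + (t2 - t1) * (target - v1) / (v2 - v1)
    raise ValueError("target outside table range")

print(f"{flash_point(table, target):.1f} K")  # 294.0 K
```

With table spacing of 10 K, the interpolation error is comfortably inside the 5-15°C uncertainty the text quotes for the method itself.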

Estimating Change of Sampling Error with Change in... [Pg.1753]

Example 2 Calculation of Error with Doubled Sample Weight ... 19-5

Example 2 Calculation of Error with Doubled Sample Weight Repeated measurements from a lot of anhydrous alumina for loss on ignition established a test standard error of 0.15 percent for a sample weight of 500 grams, noting that V is the square of the s.e. The calculation of the variance V and s.e. for a 1000-gram sample is... [Pg.1757]
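The arithmetic of this example can be worked through directly. The sketch below assumes the standard sampling-theory relation that sampling variance is inversely proportional to sample weight, which is what "Estimating Change of Sampling Error with Change in [sample weight]" relies on; doubling the weight then halves V.

```python
import math

# Worked version of Example 2: test standard error s.e. = 0.15 percent at
# 500 g, with V = s.e.**2. Assumption: sampling variance is inversely
# proportional to sample weight, so doubling the weight halves V.

se_500 = 0.15                  # percent, from repeated measurements
V_500 = se_500 ** 2            # variance at 500 g

V_1000 = V_500 * (500 / 1000)  # variance at 1000 g
se_1000 = math.sqrt(V_1000)    # standard error at 1000 g

print(f"V(1000 g) = {V_1000:.5f}, s.e.(1000 g) = {se_1000:.3f} percent")
```

The standard error thus improves only by a factor of sqrt(2), to about 0.106 percent, for a doubling of sample weight.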

The switching-off method for IR-free potential measurement is, according to the data in Fig. 3-5, subject to error with lead-sheathed cables. For a rough survey, measurements of potential can be used to set up and control the cathodic protection. This means that no information can be gathered on the complete corrosion protection, but only on the protection current entry and the elimination of cell activity from contacts with foreign cathodic structures. The reverse switching method in Section 3.3.1 can be used to obtain an accurate potential measurement. Rest and protection potentials for buried cables are listed in Table 13-1 as an appendix to Section 2.4. The protection potential region lies within U... [Pg.326]

Figure 19-7 shows off-potential measurements as an example, in which the 100-mV criterion, No. 3 in Table 3-3, as well as the potential criterion U_off < is fulfilled. It has to be remembered with off-potential measurements that, according to the data in Fig. 3-6, depolarization is slower with age, so that the 100-mV criterion must lead to errors with a measuring time of 4 hours. Off-potential measurements should be carried out after commissioning at 1-, 2-, 6- and 12-month intervals and then annually. [Pg.438]

Errors in advection may completely overshadow diffusion. The amplification of random errors with each succeeding step causes numerical instability (or distortion). Higher-order differencing techniques are used to avoid this instability, but they may result in sharp gradients, which may cause negative concentrations to appear in the computations. Many of the numerical instability (distortion) problems can be overcome with a second-moment scheme (9), which advects the moments of the distributions instead of the pollutants alone. Six numerical techniques were investigated (10), including the second-moment scheme; three were found that limited numerical distortion: the second-moment, the cubic spline, and the chapeau function. [Pg.326]
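The numerical error that motivates these higher-order schemes is easy to demonstrate. The sketch below is not the second-moment scheme itself but the simplest possible contrast: a first-order upwind difference advecting a square concentration pulse, whose peak is visibly eroded by numerical diffusion. Grid size, Courant number, and step count are arbitrary choices.

```python
# Minimal illustration of numerical diffusion in advection: first-order
# upwind differencing smears a square pulse even though the exact solution
# is pure translation. Parameters are arbitrary, not values from the text.

n, courant, steps = 100, 0.5, 80
c = [0.0] * n
for i in range(10, 20):
    c[i] = 1.0                  # square concentration pulse
peak0 = max(c)

for _ in range(steps):
    prev = c[:]                 # upwind difference: use old time level
    for i in range(1, n):
        c[i] = prev[i] - courant * (prev[i] - prev[i - 1])

print(f"peak after advection: {max(c):.2f} (started at {peak0:.2f})")
```

Total mass is essentially conserved while the peak decays, which is exactly the distortion (artificial spreading) that the second-moment, cubic-spline, and chapeau-function schemes were found to limit.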

The vane anemometer is not seriously affected by small deviations in alignment in the main flow direction. However, care is necessary, since more than 20° of misalignment causes significant errors. With regard to providing a correction for fluid density, slightly differing opinions exist. Based on measurements, it is recommended that the following density correction procedure be applied ... [Pg.1156]

Chapter 1, The Role of Human Error in Chemical Process Safety, discusses the importance of reducing human error for an effective process safety effort at the plant. The engineers, managers, and process plant personnel in the CPI need to replace a blame-and-punishment view of error with a systems viewpoint that sees error as a mismatch between human capabilities and demands. [Pg.2]

The analysis of accidents and disasters in real systems makes it clear that it is not sufficient to consider error and its effects purely from the perspective of individual human failures. Major accidents are almost always the result of multiple errors or combinations of single errors with preexisting vulnerable conditions (Wagenaar et al., 1990). Another perspective from which to define errors is in terms of when in the system life cycle they occur. In the following discussion of the definitions of human error, the initial focus will be from the engineering and the accident analysis perspective. More detailed consideration of the definitions of error will be deferred to later sections in this chapter where the various error models will be described in detail (see Sections 5 and 6). [Pg.39]

The sociotechnical systems perspective is essentially top-down, in that it addresses the question of how the implications of management policies at all levels in the organization will affect the likelihood of errors with significant consequences. The sociotechnical systems perspective is therefore concerned with the implications of management and policy on system safety, quality, and productivity. [Pg.46]

When performing human reliability assessment in CPQRA, a qualitative analysis to specify the various ways in which human error can occur in the situation of interest is necessary as the first stage of the procedure. A comprehensive and systematic method is essential for this. If, for example, an error with critical consequences for the system is not identified, then the analysis may produce a spurious impression that the level of risk is acceptably low. Errors with less serious consequences, but with greater likelihood of occurrence, may also not be considered if the modeling approach is inadequate. In the usual approach to human reliability assessment, there is little assistance for the analyst with regard to searching for potential errors. Often, only omissions of actions in proceduralized task steps are considered. [Pg.65]

For errors with serious consequences and/or high likelihood of occurrence, develop appropriate error reduction strategies. [Pg.84]

The other main application area for predictive error analysis is in chemical process quantitative risk assessment (CPQRA) as a means of identifying human errors with significant risk consequences. In most cases, the generation of error modes in CPQRA is a somewhat unsystematic process, since it only considers errors that involve the failure to perform some pre-specified function, usually in an emergency (e.g., responding to an alarm within a time interval). The fact that errors of commission can arise as a result of diagnostic failures, or that poor interface design or procedures can also induce errors is rarely considered as part of CPQRA. However, this may be due to the fact that HEA techniques are not widely known in the chemical industry. The application of error analysis in CPQRA will be discussed further in Chapter 5. [Pg.191]

A validation study of the technique showed that it was capable of predicting a high proportion (98%) of errors with serious consequences that actually occurred in an equipment calibration task over a 5-year period (Murgatroyd and Tait, 1987). [Pg.194]

This stage involves representing the structure of the tasks in which errors with severe consequences could occur, in a manner that allows the probabilities of these consequences to be generated. The usual forms of representation are event trees and fault trees. [Pg.209]
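A simple event-tree representation of the kind described above can be sketched as follows. The task steps, human error probabilities, and recovery probabilities are purely illustrative assumptions, not values from the text; the point is how the probability of the severe consequence is assembled from the tree's branches.

```python
# Hedged sketch of an event-tree representation of a task: each step either
# succeeds or fails, and an unrecovered failure at any step leads to the
# consequence of interest. All probabilities are hypothetical.

steps = [
    # (human error probability, probability the error is recovered)
    (0.003, 0.9),   # e.g. omit a procedural step, usually caught by a check
    (0.01,  0.5),   # e.g. misread an indicator
]

p_consequence = 0.0
p_reach = 1.0       # probability of arriving at this step error-free
for hep, p_recover in steps:
    p_consequence += p_reach * hep * (1 - p_recover)
    p_reach *= (1 - hep)

print(f"P(severe consequence) = {p_consequence:.5f}")
```

A fault tree expresses the same logic top-down from the consequence; the event tree, as here, runs forward through the task sequence.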

A practical advantage of HTA compared with other techniques is that it allows the analysis to proceed to whatever level of detail is appropriate. At each level, the question can be asked: "Could an error with serious consequences occur during this operation?" If the answer to this question is definitely no, then it is not necessary to proceed with a more detailed analysis. [Pg.212]

For those errors with significant consequences where recovery is unlikely, the qualitative analysis concludes with a consideration of error reduction strategies that will reduce the likelihood of these errors to an acceptable level. These strategies can be inferred directly from the results of the PIF analysis, since this indicates the deficiencies in the situation which need to be remedied to reduce the error potential. [Pg.217]

Because most research effort in the human reliability domain has focused on the quantification of error probabilities, a large number of techniques exist. However, a relatively small number of these techniques have actually been applied in practical risk assessments, and even fewer have been used in the CPI. For this reason, only three techniques will be described in detail in this section. More extensive reviews are available from other sources (e.g., Kirwan et al., 1988; Kirwan, 1990; Meister, 1984). Following a brief description of each technique, a case study will be provided to illustrate the application of the technique in practice. As emphasized in the early part of this chapter, quantification has to be preceded by a rigorous qualitative analysis in order to ensure that all errors with significant consequences are identified. If the qualitative analysis is incomplete, then quantification will be inaccurate. It is also important to be aware of the limitations of the accuracy of the data generally available... [Pg.222]

The model chemistries in this table are arranged in ascending order of mean absolute deviation. The other columns give the standard deviation of the MAD and the absolute value of the maximum error with respect to experiment for each model chemistry. [Pg.157]

A comparison between G1, G2, G2(MP2) and G2(MP2,SVP) is shown in Table 5.2 for the reference G2 data set: the mean absolute deviations vary from 1.1 to 1.6 kcal/mol. There are other variations of the G2 methods in use, for example involving DFT methods for geometry optimization and frequency calculation, or CCSD(T) instead of QCISD(T), with slightly varying performance and computational cost. The errors with the G2 method are comparable to those obtained directly from calculations at the CCSD(T)/cc-pVTZ level, at a significantly lower computational cost. [Pg.166]
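The summary statistics used to compare model chemistries here (mean absolute deviation, its standard deviation, and the maximum absolute error with respect to experiment) can be computed as below. The deviations are made-up numbers, not the G2 benchmark data.

```python
# Sketch of the error statistics used to rank model chemistries: MAD,
# the standard deviation of the absolute deviations, and the maximum
# absolute error vs. experiment. Input deviations are hypothetical.

deviations = [0.4, -1.2, 0.9, -0.3, 2.1, -0.8]  # kcal/mol, made-up

abs_dev = [abs(d) for d in deviations]
mad = sum(abs_dev) / len(abs_dev)
var = sum((a - mad) ** 2 for a in abs_dev) / len(abs_dev)
max_err = max(abs_dev)

print(f"MAD = {mad:.2f}, std = {var ** 0.5:.2f}, max = {max_err:.1f} kcal/mol")
```

Note that the MAD averages magnitudes only, so a method with small MAD can still hide one large outlier, which is why the maximum error is tabulated alongside it.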

As is apparent from Eqs. (8) and (9), the decay of the errors with the truncation radii in the series (1) and (2) depends only marginally on the energy, provided the conditions (7) are satisfied; it is determined essentially by the incomplete gamma functions. We can therefore make the two truncation errors as close as possible simply by equating the arguments of both gamma functions. Thus, putting in Eqs. (8) and (9)... [Pg.443]

Ave. % underdesign Errors with safety factor included 0 40 19 10... [Pg.173]

Stores of this size can be built, using standard size factory-made sandwich panels, cutting these to size, jointing and sealing on site. This form of construction is prone to fitting errors, with subsequent failure of the insulation, if not carried out by skilled and experienced craftsmen. The best system can be ruined if the base is uneven or by inexpert finishing of pipe entries, sealing, etc. [Pg.177]


See other pages where Errors with is mentioned: [Pg.775]    [Pg.2219]    [Pg.2832]    [Pg.590]    [Pg.187]    [Pg.103]    [Pg.1757]    [Pg.478]    [Pg.286]    [Pg.71]    [Pg.176]    [Pg.606]    [Pg.12]    [Pg.13]    [Pg.13]    [Pg.194]    [Pg.247]    [Pg.173]    [Pg.646]    [Pg.5]    [Pg.221]    [Pg.138]   
See also in sourсe #XX -- [ Pg.107 ]







Balancing with a gross error

DRG with error propagation

Denoising and compression of data with Gaussian errors

Electronic structure errors associated with

Error, integrated with feedback control

Error, integrated with interacting controllers

Errors Associated with Beers Law Relationships

Errors compared with uncertainties

Errors with Nernst equation

Errors with diffusive systems

Errors with iodine

Exponential Estimator - Issues with Sampling Error and Bias

FIGURE 6.10 Empirical p-box corresponding to a data set with measurement error including 4 nondetect values

Feedforward control error with

Filtering of data with non-Gaussian errors

Hoses errors with

Human error, accidents associated with

Impedance Measurements Integrated with Error Analysis

Linear regression with errors

Measurements with Gross Error

Medical Devices with a High Incidence of Human Error

Modelling errors with respect to choice of

On-line multiscale filtering of data with Gaussian errors

Operating defects while pumping with gas ballast Potential sources of error where the required ultimate pressure is not achieved

Operating errors with diffusion and vapor-jet pumps

Problem with statistical sampling error

Saturated calomel electrode errors with

Some of the Many Unpublished Errors Created with Hoses

Titration error with acid/base indicators

Unweighted Linear Regression with Errors in

Unweighted linear regression, with errors

Weighted Linear Regression with Errors in

Weighted linear regression with errors

© 2024 chempedia.info