Big Chemical Encyclopedia


The Analysis of Errors

An early article on the error caused by the application of the QSSA was written by Frank-Kamenetskii (1940), who is perhaps better known for theories on reactor stability and flame modelling. This very brief article received only a few citations over the several decades following its publication (Benson 1952; Sayasov and Vasil'eva 1955; Rice 1960). Turányi and Tóth (1992) published an English translation of Frank-Kamenetskii's article with detailed comments. Further development and generalisation (Turányi et al. 1993b) of Frank-Kamenetskii's reasoning allows the calculation of the error caused by the QSSA and is detailed below. [Pg.234]

On the application of the QSSA and using the notation introduced in Sect. 7.8.1, the Jacobian can be divided into four submatrices: [Pg.234]

At the beginning of reaction kinetic simulations, usually the concentrations of only a few species (e.g. reactants, diluent gases, etc.) are defined, and the other concentrations are set to zero. The QSSA is not usually applicable from the beginning of the simulation, since at this point the trajectories are quite far from any underlying slow manifolds (see Sect. 6.5). Hence, the kinetic system of ODEs (7.69) is usually solved first, and at time t1 the calculation is switched to the solution of the DAE system (7.70)-(7.71). We denote by Y(t1) = (Y^(1)(t1), Y^(2)(t1)) the solution of Eq. (7.69) at time t1. When the system of Eqs. (7.70)-(7.71) is used, the concentrations of the QSS-species are calculated first via the solution of the algebraic system of Eqs. (7.71), and the result is the concentration vector ... The concentrations of ... [Pg.234]

The local error of the QSSA at time t1 is given by the following vector (Turányi et al. 1993b): [Pg.235]

We now calculate the Taylor expansion of the function at the variable values Y(t1): [Pg.235]
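
The equations themselves are not reproduced in this excerpt, so the following is only a hedged sketch of the argument in assumed notation (Y^(1) for the non-QSS species, Y^(2) for the QSS-species, f^(2) for their production rates, and J_22 for the lower-right Jacobian submatrix); it is not a copy of the book's numbered Eqs. (7.69)-(7.71):

```latex
% Hedged sketch of the QSSA local-error estimate; the notation is assumed,
% not taken from the source.
\begin{align*}
  \frac{\mathrm{d}\mathbf{Y}^{(2)}}{\mathrm{d}t}
      &= \mathbf{f}^{(2)}\bigl(\mathbf{Y}^{(1)},\mathbf{Y}^{(2)}\bigr)
      \quad\text{(full kinetic ODEs)}\\
  \mathbf{0} &= \mathbf{f}^{(2)}\bigl(\mathbf{Y}^{(1)},\hat{\mathbf{Y}}^{(2)}\bigr)
      \quad\text{(QSSA algebraic condition)}\\
  \mathbf{0} &\approx \mathbf{f}^{(2)}\bigl(\mathbf{Y}^{(1)},\mathbf{Y}^{(2)}\bigr)
      + \mathbf{J}_{22}\bigl(\hat{\mathbf{Y}}^{(2)}-\mathbf{Y}^{(2)}\bigr)
      \quad\text{(first-order Taylor expansion)}\\
  \Delta\mathbf{Y}^{(2)} = \mathbf{Y}^{(2)}-\hat{\mathbf{Y}}^{(2)}
      &\approx \mathbf{J}_{22}^{-1}\,
        \frac{\mathrm{d}\mathbf{Y}^{(2)}}{\mathrm{d}t}
      \quad\text{(local error of the QSSA)}
\end{align*}
% For a single QSS species this reduces to Frank-Kamenetskii's estimate
% \Delta c \approx (\mathrm{d}c/\mathrm{d}t)/J_{22}: the error is small when the
% species' lifetime and/or its net rate of change are small.
```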


The cognitive approach has had a major influence in recent years on how human error is treated in systems such as chemical process plants and nuclear power generation. In the next section we shall describe some of the key concepts that have emerged from this work, and how they apply to the analysis of error in the CPI. Discussions of the cognitive view of human performance are contained in Reason (1990), Hollnagel (1993), Kantowitz and Fujita (1990), Hollnagel and Woods (1983), and Woods and Roth (1990). [Pg.68]

The next three chapters deal with the most widely used classes of methods: free energy perturbation (FEP) [3], methods based on probability distributions and histograms, and thermodynamic integration (TI) [1, 2]. These chapters represent a mix of traditional material that has already been well covered and descriptions of new techniques that have been developed only recently. The common thread followed here is that different methods share the same underlying principles. Chapter 5 is dedicated to a relatively new class of methods, based on calculating free energies from nonequilibrium dynamics. In Chap. 6, we discuss an important topic that has not so far received sufficient attention - the analysis of errors in free energy calculations, especially those based on perturbative and nonequilibrium approaches. [Pg.523]

It has not yet been proven whether equation (187) or (180) provides the better description of the experimental results. Problems concerning the analysis of errors in Ea and A are virtually the same as those appearing in the case of the Eyring equation. [Pg.284]

Love, A., Mammino, L. (1997). Using the Analysis of Errors to Improve Students' Expression in the Sciences. Zimbabwe Journal of Educational Research, 9(1), 1-17. [Pg.223]

A model procedure recommended for the determination of the precision (and of the sensitivity and detection limit) of spectrophotometric methods, based on experimental data and IUPAC recommendations, has been developed [30], and the analysis of errors occurring in determinations of mixture components by means of modern computerized techniques has been published [31,32]. [Pg.43]

For the validation of a simulation, one must solve the equations that incorporate the model correctly [29, 30]. The validation includes the analysis of errors of discretization and modeling. Discretization errors can be reduced through proper allocation of mesh points, and they should be small compared with the uncertainty of the experiment. Comparison with experimental data requires that the accuracy of the experiments be known. [Pg.167]
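
A hedged sketch of one common way to quantify a discretization error, by comparing results on two mesh levels via Richardson extrapolation; the function `solve_on_mesh`, the assumed convergence order, and the refinement ratio are illustrative stand-ins, not taken from the cited work:

```python
# Hedged sketch: estimating the discretization error of a simulation result by
# Richardson extrapolation, assuming a solver of known formal order p.
# `solve_on_mesh` is a hypothetical stand-in for the actual simulation.

def solve_on_mesh(h: float) -> float:
    """Placeholder: return some scalar result (e.g. a peak temperature)
    computed on a mesh with characteristic spacing h."""
    exact = 1.0
    return exact + 0.3 * h**2          # pretend the solver is second order

def discretization_error_estimate(h: float, p: int = 2, r: float = 2.0) -> float:
    """Estimate the error of the fine-mesh result f(h) from two mesh levels."""
    f_fine = solve_on_mesh(h)
    f_coarse = solve_on_mesh(r * h)
    return (f_coarse - f_fine) / (r**p - 1.0)   # ~ error of f_fine

if __name__ == "__main__":
    err = discretization_error_estimate(h=0.01)
    print(f"estimated discretization error: {err:.2e}")
    # This estimate should be small compared with the experimental uncertainty
    # before the comparison with measurements is considered meaningful.
```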

For the analysis of errors in the system and the evaluation of system reliability, event trees were used. These are graphical models of the cause-and-effect relationships that occur during problem solving and are built from the initiating events. The trees represent all possible sequences of events that follow as consequences of an initiating event and allow the probability of an adverse event to be specified. [Pg.2420]
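
As an illustrative, hedged sketch (not taken from the cited analysis): the probability of an adverse outcome is obtained from an event tree by multiplying the branch probabilities along each sequence that starts from the initiating event and summing over the sequences that end in the adverse state. The branch names and probabilities below are invented for illustration.

```python
# Hedged sketch: evaluating an event tree. The initiating event is followed by a
# series of branch points (e.g. safety functions that succeed or fail); each leaf
# sequence's probability is the product of its branch probabilities, and the
# probability of the adverse event is the sum over the sequences leading to it.
from itertools import product

p_initiating = 1e-2          # frequency/probability of the initiating event
branches = {                 # probability that each safety function FAILS
    "alarm": 0.05,
    "operator_response": 0.10,
    "relief_valve": 0.01,
}

p_adverse = 0.0
for outcome in product([True, False], repeat=len(branches)):  # True = failure
    p_seq = p_initiating
    for (name, p_fail), failed in zip(branches.items(), outcome):
        p_seq *= p_fail if failed else (1.0 - p_fail)
    if all(outcome):                      # adverse event: every barrier fails
        p_adverse += p_seq

print(f"probability of the adverse event: {p_adverse:.2e}")   # 5e-07 here
```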

The normal distribution of measurements (or the normal law of error) is the fundamental starting point for analysis of data. When a large number of measurements are made, the individual measurements are not all identical and equal to the accepted value μ, which is the mean of an infinite population or universe of data, but are scattered about μ, owing to random error. If the magnitude of any single measurement is the abscissa and the relative frequencies (i.e., the probability) of occurrence of different-sized measurements are the ordinate, the smooth curve drawn through the points (Fig. 2.10) is the normal or Gaussian distribution curve (also the error curve or probability curve). The term error curve arises when one considers the distribution of errors (x − μ) about the true value. [Pg.193]
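
A minimal numerical sketch of this picture (the values of μ and σ below are arbitrary, chosen only for illustration): the Gaussian density can be evaluated directly and compared with the relative frequencies of simulated measurements scattered about μ.

```python
# Hedged sketch: the normal (Gaussian) error curve around an accepted value mu.
# mu and sigma are arbitrary illustration values, not data from the text.
import math
import random

mu, sigma = 50.00, 0.15      # accepted value and standard deviation (illustrative)

def gaussian_pdf(x: float, mu: float, sigma: float) -> float:
    """Normal distribution density: the 'error curve' for deviations x - mu."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Simulated measurements scatter about mu owing to random error ...
measurements = [random.gauss(mu, sigma) for _ in range(10_000)]

# ... and their relative frequencies follow the smooth Gaussian curve.
bin_width = 0.05
for x in [mu - 2 * sigma, mu - sigma, mu, mu + sigma, mu + 2 * sigma]:
    frac = sum(1 for m in measurements if abs(m - x) < bin_width / 2) / len(measurements)
    print(f"x = {x:6.3f}  pdf = {gaussian_pdf(x, mu, sigma):.3f}  "
          f"observed frequency per unit x = {frac / bin_width:.3f}")
```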

Example 7 A new method for the analysis of iron using pure FeO was replicated with five samples, giving these results (in % Fe): 76.95, 77.02, 76.90, 77.20, and 77.50. Does a systematic error exist? [Pg.199]
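
The worked solution is not reproduced in this excerpt, but a hedged sketch of the usual test follows: compare the replicate mean with the stoichiometric iron content of pure FeO (about 77.73 % Fe, computed here from atomic masses as an assumption about the accepted value μ) using a one-sample t test.

```python
# Hedged sketch of the usual significance test for a systematic error:
# one-sample t test of the replicate mean against the accepted value.
# The accepted value is assumed to be the stoichiometric %Fe in pure FeO.
import statistics

results = [76.95, 77.02, 76.90, 77.20, 77.50]          # % Fe, from the example
mu = 100 * 55.845 / (55.845 + 15.999)                  # ~77.73 % Fe in FeO (assumed)

n = len(results)
mean = statistics.mean(results)                        # 77.114
s = statistics.stdev(results)                          # ~0.244
t_calc = abs(mean - mu) / (s / n**0.5)                 # ~5.6

t_crit_95 = 2.776                                      # t(0.05, 4 degrees of freedom)
print(f"mean = {mean:.3f} % Fe, s = {s:.3f}, t = {t_calc:.2f}")
print("systematic error indicated" if t_calc > t_crit_95
      else "no systematic error detected")
```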

A method for the analysis of Ca²⁺ in water suffers from an interference in the presence of Zn²⁺. When the concentration of Ca²⁺ is 100 times greater than that of Zn²⁺, an analysis for Ca²⁺ gives a relative error of +0.5%. What is the selectivity coefficient for this method? [Pg.40]
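
A hedged sketch of how such a problem is usually worked, assuming the common convention in which the interferent contributes K·k_Ca·C_Zn to the analyte's signal, so that the relative error equals K·(C_Zn/C_Ca):

```python
# Hedged sketch: selectivity coefficient from a known relative error.
# Assumes relative_error = K_CaZn * (C_Zn / C_Ca), i.e. the interferent's signal
# is K_CaZn * k_Ca * C_Zn (a common convention, stated here as an assumption).

relative_error = 0.005          # +0.5 % expressed as a fraction
conc_ratio_Zn_to_Ca = 1 / 100   # C_Zn / C_Ca (Ca2+ is 100 times more concentrated)

K_CaZn = relative_error / conc_ratio_Zn_to_Ca
print(f"selectivity coefficient K_Ca,Zn = {K_CaZn:.1f}")   # 0.5 under this convention
```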

A proportional determinate error, in which the error's magnitude depends on the amount of sample, is more difficult to detect since the result of an analysis is independent of the amount of sample. Table 4.6 outlines an example showing the effect of a positive proportional error of 1.0% on the analysis of a sample that is 50.0% w/w in analyte. In terms of equations 4.4 and 4.5, the reagent blank, Sreag, is an example of a constant determinate error, and the sensitivity, k, may be affected by proportional errors. [Pg.61]
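
A hedged numerical sketch of the distinction (the sample masses, the constant-error magnitude, and the 1.0% proportional error on a 50.0% w/w sample are used only for illustration): a constant error's effect on the reported % w/w shrinks as the sample size grows, whereas a proportional error shifts the result by the same amount at every sample size.

```python
# Hedged sketch: constant vs. proportional determinate errors for a sample that
# is 50.0 % w/w analyte. Masses and error magnitudes are illustrative only.

true_fraction = 0.500            # 50.0 % w/w analyte
constant_error_g = 0.010         # e.g. a fixed reagent-blank contribution, in grams
proportional_error = 0.010       # +1.0 % of the analyte actually present

print(f"{'sample / g':>10} {'constant err %w/w':>18} {'proportional err %w/w':>22}")
for sample_mass in [0.1, 0.2, 0.5, 1.0, 2.0]:
    analyte = true_fraction * sample_mass
    with_constant = (analyte + constant_error_g) / sample_mass * 100
    with_proportional = analyte * (1 + proportional_error) / sample_mass * 100
    print(f"{sample_mass:10.1f} {with_constant:18.1f} {with_proportional:22.1f}")

# The constant error gives 60.0, 55.0, 52.0, 51.0, 50.5 %w/w as the sample grows,
# while the proportional error gives 50.5 %w/w at every sample size, so only the
# constant error can be detected by analysing different amounts of sample.
```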

Guedens, W. J.; Yperman, J.; Mullens, J.; et al. Statistical Analysis of Errors: A Practical Approach for an Undergraduate Chemistry Lab, Part 1. The Concept, J. Chem. Educ. 1993, 70, 776-779; Part 2. Some Worked Examples, J. Chem. Educ. 1993, 70, 838-841. [Pg.102]

In this experiment the overall variance for the analysis of potassium hydrogen phthalate (KHP) in a mixture of KHP and sucrose is partitioned into that due to sampling and that due to the analytical method (an acid-base titration). By having individuals analyze samples with different % w/w KHP, the relationship between sampling error and concentration of analyte can be explored. [Pg.225]
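
A hedged sketch of the variance partitioning this experiment relies on (the numbers are invented for illustration; the relationship s²_total = s²_sampling + s²_method is the standard assumption for independent error sources):

```python
# Hedged sketch: partitioning the overall variance of a KHP determination into a
# sampling contribution and a method (acid-base titration) contribution, assuming
# independent errors so that s2_total = s2_sampling + s2_method.
# The results below are illustrative, not data from the experiment.
import statistics

# Replicate analyses of separately drawn samples -> overall (total) variation
overall_results = [16.8, 17.5, 16.2, 17.1, 16.5, 17.3]        # % w/w KHP (illustrative)
s2_total = statistics.variance(overall_results)

# Replicate titrations of one homogeneous portion -> method variation only
method_results = [16.92, 16.88, 16.95, 16.90, 16.93]          # % w/w KHP (illustrative)
s2_method = statistics.variance(method_results)

s2_sampling = max(s2_total - s2_method, 0.0)
print(f"s2_total = {s2_total:.4f}, s2_method = {s2_method:.4f}, "
      f"s2_sampling = {s2_sampling:.4f}")
```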

When using a spectrophotometer for which the precision of absorbance measurements is limited by the uncertainty of reading %T, the analysis of highly absorbing solutions can lead to an unacceptable level of indeterminate errors. Consider the analysis of a sample for which the molar absorptivity is... [Pg.455]
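
The excerpt breaks off, but a hedged sketch of the underlying relationship can still be given: if the transmittance reading carries a fixed absolute uncertainty ΔT, propagating it through A = -log T and Beer's law gives a relative concentration error of magnitude 0.434·ΔT/(T·log T), which is smallest near A ≈ 0.43 and grows rapidly for highly absorbing solutions. The value of ΔT below is an assumption for illustration.

```python
# Hedged sketch: relative concentration error when the indeterminate error is a
# fixed uncertainty dT in reading %T. From A = -log10(T) and Beer's law,
# |dC/C| = 0.434 * dT / (T * |log10(T)|). dT below is illustrative.
import math

dT = 0.002            # absolute uncertainty in transmittance (0.2 %T), assumed

print(f"{'A':>5} {'%T':>7} {'|dC/C| %':>9}")
for A in [0.1, 0.2, 0.434, 1.0, 1.5, 2.0, 2.5]:
    T = 10.0 ** (-A)
    rel_err = abs(0.434 * dT / (T * math.log10(T))) * 100
    print(f"{A:5.2f} {100*T:7.2f} {rel_err:9.2f}")

# The relative error passes through a minimum near A = 0.434 (T = 0.368) and
# climbs steeply for strongly absorbing solutions (A > 2), which is why such
# samples are usually diluted before measurement.
```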

The data used to construct a two-sample chart can also be used to separate the total variation of the data, σ_tot, into contributions from random error, σ_rand, and systematic errors due to the analysts, σ_sys. Since an analyst's systematic errors should be present at the same level in the analysis of samples X and Y, the difference, D, between the results for the two samples...
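
The excerpt is cut off, but a hedged sketch of the standard two-sample (Youden) decomposition can illustrate the idea: because each analyst's systematic error appears equally in X and Y, it cancels in the difference D = X − Y and adds in the total T = X + Y, so σ_rand can be estimated from the spread of D and σ_sys from what remains. The paired results below are invented for illustration.

```python
# Hedged sketch of the two-sample (Youden) variance decomposition, assuming each
# analyst's systematic error shifts X and Y equally:
#   var(D) = 2*sigma_rand**2               with D = X - Y
#   var(T) = 2*sigma_rand**2 + 4*sigma_sys**2   with T = X + Y
import statistics

x = [10.2, 9.8, 10.5, 9.6, 10.1, 10.4, 9.9, 10.3]   # results for sample X, one per analyst
y = [ 9.9, 9.5, 10.3, 9.2,  9.8, 10.2, 9.6, 10.0]   # results for sample Y, same analysts

d = [xi - yi for xi, yi in zip(x, y)]
t = [xi + yi for xi, yi in zip(x, y)]

sigma_rand = (statistics.variance(d) / 2) ** 0.5
sigma_sys2 = max((statistics.variance(t) - statistics.variance(d)) / 4, 0.0)
print(f"sigma_rand = {sigma_rand:.3f}, sigma_sys = {sigma_sys2 ** 0.5:.3f}")
```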

Spike recoveries for samples are used to detect systematic errors due to the sample matrix or the stability of the sample after its collection. Ideally, samples should be spiked in the field at a concentration between 1 and 10 times the expected concentration of the analyte or 5 to 50 times the method's detection limit, whichever is larger. If the recovery for a field spike is unacceptable, then a sample is spiked in the laboratory and analyzed immediately. If the recovery for the laboratory spike is acceptable, then the poor recovery for the field spike may be due to the sample's deterioration during storage. When the recovery for the laboratory spike also is unacceptable, the most probable cause is a matrix-dependent relationship between the analytical signal and the concentration of the analyte. In this case the samples should be analyzed by the method of standard additions. Typical limits for acceptable spike recoveries for the analysis of waters and wastewaters are shown in Table 15.1. [Pg.711]
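
A hedged sketch of the spike-recovery calculation itself (the concentrations are illustrative; the usual definition, assumed here, is %R = (C_spiked − C_unspiked)/C_added × 100):

```python
# Hedged sketch: percent recovery of a spike. Concentrations are illustrative.

def spike_recovery(c_spiked: float, c_unspiked: float, c_added: float) -> float:
    """%R = (found in spiked sample - found in unspiked sample) / amount added * 100."""
    return (c_spiked - c_unspiked) / c_added * 100.0

c_unspiked = 1.80      # mg/L found in the original sample
c_added = 2.00         # mg/L added as the spike (1-10x the expected concentration)
c_spiked = 3.60        # mg/L found in the spiked sample

r = spike_recovery(c_spiked, c_unspiked, c_added)
print(f"spike recovery = {r:.0f}%")       # 90% here; compare with Table 15.1 limits
# A poor field-spike recovery with an acceptable laboratory-spike recovery points to
# deterioration during storage; poor recoveries in both suggest a matrix effect and
# the method of standard additions.
```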

The principal tool for performance-based quality assessment is the control chart. In a control chart the results from the analysis of quality assessment samples are plotted in the order in which they are collected, providing a continuous record of the statistical state of the analytical system. Quality assessment data collected over time can be summarized by a mean value and a standard deviation. The fundamental assumption behind the use of a control chart is that quality assessment data will show only random variations around the mean value when the analytical system is in statistical control. When an analytical system moves out of statistical control, the quality assessment data is influenced by additional sources of error, increasing the standard deviation or changing the mean value. [Pg.714]
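
A hedged sketch of a simple property control chart built from such quality assessment data (the results are invented; the conventional ±2σ warning and ±3σ control limits are assumed):

```python
# Hedged sketch: a property control chart for quality assessment samples analysed
# over time. The mean and standard deviation come from an initial in-control period;
# later results are flagged if they fall outside the conventional 3-sigma limits.
# All values below are invented for illustration.
import statistics

baseline = [100.2, 99.8, 100.5, 99.6, 100.1, 100.4, 99.9, 100.3,
            99.7, 100.0, 100.2, 99.8, 100.1, 99.9, 100.3]        # in-control data
mean = statistics.mean(baseline)
s = statistics.stdev(baseline)
ucl, lcl = mean + 3 * s, mean - 3 * s          # control limits
uwl, lwl = mean + 2 * s, mean - 2 * s          # warning limits

new_results = [100.1, 99.5, 100.6, 101.4, 100.2]   # plotted in collection order
for i, r in enumerate(new_results, 1):
    if not lcl <= r <= ucl:
        status = "OUT OF CONTROL"
    elif not lwl <= r <= uwl:
        status = "warning"
    else:
        status = "in control"
    print(f"sample {i}: {r:6.1f}  ({status})")
```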

In spite of the compounding of errors to which it is subject, the foregoing method was the best procedure for measuring reactivity ratios until the analysis of microstructure became feasible. Let us now consider this development. [Pg.460]

A procedure for the simultaneous extraction-spectrophotometric determination of nitrophenols in wastewater is proposed, using the analysis of mixtures of mono-, di-, and trinitrophenols as an example. The procedure consists of extractive preconcentration in an acid medium followed by sequential back-extractions at various pH values. Such a scheme makes it possible to isolate the o-, m-, and p-nitrophenols, the α-, β-, and γ-dinitrophenols, and trinitrophenol as separate groups. The simultaneous determination is carried out from the summed light absorption of the nitrophenol ions. The error of the determined concentrations at the maximum contaminant level in natural waters does not exceed 10%. The peculiarities of applying sequential extractions at fixed pH were studied using a mixture of the simplest phenols (phenol and o-, m-, and p-cresols) as an example. The procedure for their determination is based on extraction into carbon tetrachloride, subsequent back-extraction, and spectrophotometric measurement of the products of their reaction with diazotized p-nitroaniline. [Pg.126]

The analysis of accidents and disasters in real systems makes it clear that it is not sufficient to consider error and its effects purely from the perspective of individual human failures. Major accidents are almost always the result of multiple errors or combinations of single errors with preexisting vulnerable conditions (Wagenaar et al., 1990). Another perspective from which to define errors is in terms of when in the system life cycle they occur. In the following discussion of the definitions of human error, the initial focus will be from the engineering and the accident analysis perspective. More detailed consideration of the definitions of error will be deferred to later sections in this chapter where the various error models will be described in detail (see Sections 5 and 6). [Pg.39]

These explanations do not exhaust the possibilities with regard to underlying causes, but they do illustrate an important point: the analysis of human error purely in terms of its external form is not sufficient. If the underlying causes of errors are to be addressed and suitable remedial strategies developed, then a much more comprehensive approach is required. This is also necessary from the predictive perspective. It is only by classifying errors on the basis of underlying causes that specific types of error can be predicted as a function of the specific conditions under review. [Pg.69]

As implied in the diagram representing the GEMS model (Figure 2.5) and discussed in Section 2.6.3, certain characteristic error forms occur at each of the three levels of performance. This information can be used by the human-reliability analyst for making predictions about the forms of error expected in the various scenarios that may be considered as part of a predictive safety analysis. Once a task or portion of a task is assigned to an appropriate classification, then predictions can be made. A comprehensive set of techniques for error prediction is described in Chapter 5. [Pg.79]

