Big Chemical Encyclopedia


Undetectable error

American engineers are probably more familiar with the magnitude of physical entities in U.S. customary units than in SI units. Consequently, errors made in the conversion from one set of units to the other may go undetected. The following six examples will show how to convert the elements in six dimensionless groups. Proper conversions will result in the same numerical value for the dimensionless number. The dimensionless numbers used as examples are the Reynolds, Prandtl, Nusselt, Grashof, Schmidt, and Archimedes numbers. [Pg.43]
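To make the check concrete, here is a minimal sketch (not from the source; all values illustrative) that evaluates the Reynolds number in both unit systems. A properly converted set of elements must reproduce the same dimensionless value, so a mismatch flags a conversion error:

```python
# Minimal sketch: Reynolds number Re = rho * v * D / mu evaluated in SI units
# and again after converting every element to U.S. customary units.

# SI units
rho_si = 998.0        # density, kg/m^3
v_si = 2.0            # velocity, m/s
D_si = 0.1            # pipe diameter, m
mu_si = 1.0e-3        # viscosity, Pa*s = kg/(m*s)
re_si = rho_si * v_si * D_si / mu_si

# U.S. customary units (converted from the SI values above)
rho_us = rho_si * 0.0624280   # lbm/ft^3
v_us = v_si * 3.28084         # ft/s
D_us = D_si * 3.28084         # ft
mu_us = mu_si * 0.671969      # lbm/(ft*s)
re_us = rho_us * v_us * D_us / mu_us

print(f"Re (SI) = {re_si:.0f}")  # ~199600
print(f"Re (US) = {re_us:.0f}")  # same value; any difference means a bad conversion
```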

Validation versus Rectification The goal of both rectification and validation is the detection and identification of measurements that contain systematic error. Rectification is typically done simultaneously with reconciliation, using the reconciliation results to identify measurements that potentially contain systematic error. Validation typically relies only on other measurements and operating information. Consequently, validation is preferred when measurements and their supporting information are limited. Further, prior screening of measurements limits the possibility that systematic errors will go undetected in the rectification step and subsequently be incorporated into any conclusions drawn during the interpretation step. [Pg.2566]

The methods discussed in the technical literature are not exact. Numerical simulations of plant performance show that gross errors frequently remain undetected when they are present, or that measurements are isolated as containing gross errors when they do not contain any. [Pg.2571]
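As one concrete illustration of this kind of test (a sketch, not one of the specific methods the text evaluates), the classical "global test" compares a balance residual against a chi-square threshold. The flow values and variances below are invented:

```python
# Hedged sketch of the global (chi-square) test for gross errors around a
# single mass balance F1 = F2 + F3, with independent measurements.
from scipy.stats import chi2

# Measured flows (kg/h) and their standard deviations (illustrative values)
F = {"F1": 101.5, "F2": 60.2, "F3": 35.1}
sigma = {"F1": 1.0, "F2": 0.8, "F3": 0.6}

# Balance residual and its variance
r = F["F1"] - F["F2"] - F["F3"]
var_r = sigma["F1"]**2 + sigma["F2"]**2 + sigma["F3"]**2

gamma = r**2 / var_r             # ~chi-square(1) if no gross error is present
critical = chi2.ppf(0.95, df=1)  # 5% significance level

print(f"residual = {r:.2f} kg/h, gamma = {gamma:.2f}, critical = {critical:.2f}")
if gamma > critical:
    print("Gross error suspected somewhere in F1..F3.")
else:
    print("No gross error detected -- which, as noted above, is not proof of absence.")
```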

The root locus method provides a very powerful tool for control system design. The objective is to shape the loci so that closed-loop poles can be placed in the s-plane at positions that produce a transient response that meets a given performance specification. It should be noted that a root locus diagram does not provide information relating to the steady-state response, so steady-state errors may go undetected unless checked by other means, i.e., the time response. [Pg.132]
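As a hedged illustration of that final caveat (not from the source), the sketch below simulates the closed-loop step response for an assumed unity-feedback loop and reads off the steady-state error that the root locus alone would not reveal:

```python
# Even if a root-locus design gives an acceptable transient, the steady-state
# error must be checked separately, e.g. from the step response.
import numpy as np
from scipy import signal

K = 9.0  # proportional gain assumed chosen from the loci
# Unity-feedback loop with plant G(s) = 1/(s + 1):
# closed loop T(s) = K*G / (1 + K*G) = K / (s + 1 + K)
T = signal.TransferFunction([K], [1.0, 1.0 + K])

t = np.linspace(0, 5, 500)
t, y = signal.step(T, T=t)

e_ss = 1.0 - y[-1]                         # steady-state error for a unit step
print(f"steady-state error ~ {e_ss:.3f}")  # ~0.1; final-value theorem gives 1/(1+K)
```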

One assay falls out of trend by about 4%; this could be a laboratory error (see Section 4.32) that, by chance, does not generate an OOS result and so goes undetected. This out-of-trend (OOT) result could... [Pg.312]

Non-linearity during the progression of the enzyme reaction, or an initial lag phase which goes undetected, may result in significant error. [Pg.185]

The applied pretreatment techniques were digestion with a combination of acids in the pressurized or atmospheric mode, programmed dry ashing, microwave digestion, and irradiation with thermal neutrons. The analytical methods of final determination, at least four different ones for each element, covered all modern plasma techniques, various AAS modes, voltammetry, instrumental and radiochemical neutron activation analysis, and isotope dilution MS. Each participating laboratory was requested to make a minimum of five independent replicate determinations of each element on at least two different bottles on different days. Moreover, a series of different steps was undertaken in order to ensure that no substantial systematic errors were left undetected. [Pg.65]

A variety of factors may be responsible for an apparent lack of response to therapy. It is possible that the disease is not infectious or is nonbacterial in origin, or that there is an undetected pathogen. Other factors include those directly related to drug selection, the host, or the pathogen. Laboratory errors in identification and/or susceptibility testing are rare. [Pg.398]

Any errors or misinterpretations contained in this manual are solely our responsibility. We have done our best to make this manual error free, but because of its great size and complexity, some errors undoubtedly have gone undetected. Any errors that come to light will, of course, be dealt with during the next round of corrections. [Pg.4]

As the number of blind embeddings N increases, the number of undetected bits also increases as a consequence. Of course, audio quality degrades quickly as N increases. In the case of Japanese pop music, all 30 bits are detected successfully at a low error rate even after 100 rounds of embedding. For the other types of music, the simulation results are similar. [Pg.15]

Failures can be either fail-safe or fail-dangerous. Fail-safe incidents may be initiated by spurious trips that result in the accidental shutdown of equipment or processes. Fail-dangerous incidents are initiated by undetected process design or operations errors, which disable the safety interlock. Fail-dangerous activation may also result in accidental process liquid or gas releases, equipment damage, or fire and explosion. [Pg.118]

The advantages of a single standard are that it takes less time and is less involved. The series of standards method is preferred because we do not rely on just one data point (for which an error may go undetected) and we can establish the response over a range of concentrations rather than at just one concentration. [Pg.516]
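A minimal sketch of the comparison (illustrative numbers, not from the source): fitting a series of standards exposes an aberrant point through its residual, whereas a single-standard factor has no such check:

```python
# Single standard vs. a series of standards for instrument calibration.
import numpy as np

conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0])             # standards, ppm
resp = np.array([0.102, 0.199, 0.405, 0.598, 0.801])   # instrument response

# Series of standards: the least-squares fit lets an aberrant point show up
slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
print("slope, intercept:", round(slope, 4), round(intercept, 4))
print("residuals:", np.round(residuals, 4))  # an erroneous standard stands out here

# Single standard: one data point, so an error in it passes silently
k_single = resp[0] / conc[0]                 # assumes a zero intercept
sample_resp = 0.450
print("conc (series):", round((sample_resp - intercept) / slope, 3), "ppm")
print("conc (single):", round(sample_resp / k_single, 3), "ppm")
```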

Referring back to Matlab, it is very important to use the correct slash operator, \ or /, for the left and right pseudo-inverse. Applying the wrong one will invariably result in an error message or, worse, in a potentially undetected error. [Pg.142]
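The same trap can be reproduced outside Matlab. The numpy sketch below (an illustration, not from the source) shows the left-division and right-division solutions both running without complaint on a square system while only one is correct:

```python
# Matlab's A\b solves A x = b (left division), while b/A solves x A = b (right
# division). With a square A both expressions run, so picking the wrong one
# gives plausible-looking but wrong numbers -- a potentially undetected error.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                        # data generated from A x = b

x_left = np.linalg.solve(A, b)        # Matlab A\b : solves A x = b (correct here)
x_right = b @ np.linalg.inv(A)        # Matlab b/A : solves x A = b (wrong here)

print(np.allclose(x_left, x_true))    # True
print(np.allclose(x_right, x_true))   # False -- yet no error message was raised
```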

An important consequence of the presence of the metal surface is the so-called infrared selection rule. If the metal is a good conductor, the electric field parallel to the surface is screened out, and hence it is only the p-component (normal to the surface) of the external field that is able to excite vibrational modes. In other words, it is only possible to excite a vibrational mode that has a nonvanishing component of its dynamical dipole moment normal to the surface. This has the important implication that one can obtain information by infrared spectroscopy about the orientation of a molecule and definitely decide whether a mode has its dynamical dipole moment parallel with the surface (and hence is undetectable in the infrared spectra) or not. This strong polarization dependence must also be considered if one wishes to use Eq. (1) as an independent way of determining n. It is necessary to put a polarizer in the incident beam and use optically passive components (which means polycrystalline windows and mirror optics) to avoid serious errors. With these precautions we have obtained pretty good agreement for the value of n determined from Eq. (1) and by independent means, as will be discussed in Section 3.2. [Pg.3]

With these data and Darcy's law, the in-plane viscous permeabilities were determined. Only the viscous permeability coefficient was determined because it was claimed that the inertial component was undetectable within the error limits of measurement for these tests. It is important to mention that this technique could also be used to measure the permeability of diffusion layers with different fluids, such as liquid water. [Pg.264]
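As a hedged sketch of the underlying calculation (invented numbers, not the authors' data), Darcy's law dP/L = (mu/k)·u gives the viscous permeability coefficient as the fluid viscosity divided by the fitted slope:

```python
# Extracting a viscous permeability from pressure-drop/flow data via Darcy's law.
import numpy as np

mu = 1.85e-5   # gas viscosity, Pa*s (air assumed as the test fluid)
L = 0.02       # in-plane flow length, m

u = np.array([0.05, 0.10, 0.20, 0.40])           # superficial velocity, m/s
dP = np.array([925.0, 1850.0, 3700.0, 7400.0])   # measured pressure drop, Pa

# Viscous-only fit: dP/L = (mu/k) * u  ->  slope = mu/k
slope = np.polyfit(u, dP / L, 1)[0]
k = mu / slope
print(f"viscous permeability k ~ {k:.2e} m^2")   # ~2e-11 m^2 for these numbers
```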

The oxidation of butane over these orthovanadates was tested at 500°C in a flow reactor using a butane:oxygen:helium ratio of 4:8:88. The observed products were isomers of butene, butadiene, CO, and CO2. The carbon balance in these experiments was within experimental error; thus the amount of any undetected product, if present, should be small. The selectivity for dehydrogenation (butenes and butadiene) was found to depend on the butane conversion and to be quite different for different orthovanadates. Fig. 4 shows the selectivity for dehydrogenation at 12.5% conversion of butane [15,18,19]. Its value ranged from a high of over 60% for Mg3(VO4)2 to a low of less than 5% for... [Pg.399]

The standard addition method of calibration (see Chapter 1) is often used to combat the uncertainties of varying interference effects in electrothermal atomization. However, care should be taken with this approach, as errors from spurious blanks and background may go undetected. It must also be emphasized that the technique of standard additions does not correct for all types of interference. [Pg.69]
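For reference, a minimal sketch of the standard-addition arithmetic (illustrative numbers, not from the source); note how a spurious blank would shift every point and hence the intercept, without any visible symptom:

```python
# Standard additions: the unknown concentration is the magnitude of the
# x-intercept of the signal-vs-added-concentration line.
import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0])          # added analyte, ppm
resp = np.array([0.210, 0.415, 0.622, 0.828])   # measured signal

slope, intercept = np.polyfit(added, resp, 1)
c_unknown = intercept / slope                   # x-intercept magnitude, ppm
print(f"sample concentration ~ {c_unknown:.2f} ppm")

# A constant spurious blank adds to 'intercept' but not 'slope', biasing
# c_unknown -- exactly the kind of error the text warns may go undetected.
```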

Accuracy. The more accurate the sampling method the better. Given the very large environmental variability, however, sampling and analytical imprecision is rarely a significant contribution to the overall error, or width of the confidence limits, of the final result. Even highly imprecise methods, such as dust count methods, do not add much to the overall variability when the variability between workers and over time is considered. An undetected bias, however, is more serious, because such bias is not accounted for by the statistical analysis and can therefore result in gross unknown error. [Pg.108]

For example, the output of a glass electrode (in mV) plotted against the negative logarithm of the hydrogen ion activity yields a linear pH scale. This is the simplest form of performing analysis. The simplicity comes with a price, however. If the sample is contaminated by an unknown impurity, or if the response function changes for whatever reason, an undetectable error accrues. Therefore, first-order analysis relies on the invariability of the experimental conditions. [Pg.314]
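A minimal sketch of such a first-order analysis (illustrative calibration values, not from the source): a two-point buffer calibration fixes the Nernstian line, and every later reading silently trusts that the line has not moved:

```python
# First-order glass-electrode analysis: E is linear in pH (Nernst), so a
# two-point calibration fixes the line. Any later drift in the response
# produces exactly the kind of undetectable bias described above.
def ph_from_mv(E, E1, pH1, E2, pH2):
    """Interpolate pH from electrode potential using a two-point calibration."""
    slope = (E2 - E1) / (pH2 - pH1)  # ~ -59.16 mV/pH for ideal response at 25 C
    return pH1 + (E - E1) / slope

# Calibration with pH 4.01 and 7.00 buffers (illustrative mV readings)
print(round(ph_from_mv(E=-30.0, E1=165.0, pH1=4.01, E2=-11.9, pH2=7.00), 2))
```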

When there are intervariable correlations, another source of error associated with the OVAT approach appears: the so-called type II error. This means that a true difference is spuriously undetected. The Bonferroni adjustment of p-values is one rich source of increased type II errors in univariate analysis of multivariate data. This is easily realized if a situation is considered where the effect of a drug has been recorded on one relevant variable and nine irrelevant variables. The Bonferroni adjustment would in this case obscure the truly significant change in the relevant variable by the compensation for the irrelevant variables. [Pg.297]
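The arithmetic of the example is easy to reproduce; in the sketch below (hypothetical p-values, not from the source) the one genuine effect survives the unadjusted test but is lost after the Bonferroni correction:

```python
# One truly affected variable among ten, and the Bonferroni adjustment.
alpha = 0.05
n_vars = 10
alpha_bonferroni = alpha / n_vars  # 0.005 per-variable threshold

# Hypothetical per-variable p-values: variable 0 carries the real drug effect
p_values = [0.012, 0.41, 0.73, 0.25, 0.88, 0.56, 0.33, 0.91, 0.64, 0.47]

for i, p in enumerate(p_values):
    if (p < alpha) and not (p < alpha_bonferroni):
        print(f"variable {i}: p = {p} -- significant unadjusted, "
              f"lost after Bonferroni (type II error)")
```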

The principle here is general, but the absence of a break in a linear correlation does not exclude an intermediate. Values of Δβ or Δρ may be so small, or errors in determining the slopes of the components so large, that the break point is undetectable. Equally, prediction of the values of the parameters at which the break point occurs (k1 = k2) is not straightforward, and it may lie outside the range of available reactants. The technique has been applied most often in the demonstration of non-accumulating intermediates in substitution processes, both at carbon and at other elements, in quasi-symmetrical reactions [50]. [Pg.257]

The same data collection and reduction techniques are commonly used by the same workers for many different polymers. Therefore, data for these other polymers may contain errors on a similar scale, but the errors have usually, though not always, gone undetected (8). If more than 500 reflections are observed from single crystals of simple molecules, recognizable electron-density distributions have been derived from visually estimated data classified only as "weak", "medium", or "strong". The calculation of the structure becomes more sensitive to the accuracy of the intensity data as the number of data points approaches the number of variables in the structure. One problem encountered in crystal structure analyses of fibrous polymers is the very limited number of reflections (a low data-to-parameter ratio). In addition, fibrous polymers usually scatter X-rays too weakly to be accurately measured by ionization or scintillation counter techniques. Therefore, the need for a critical study of the photographic techniques for obtaining accurate diffraction intensities is paramount. [Pg.93]

The limit of all integral methods is the number of individuals required to change the result measurably, or the number of "error" vials which pass undetected. [Pg.286]

For BTM and DR data the undetected number of "error" vials cannot be given as a percentage of the total number, as it depends on the accuracy of T_ice and DR, on the ratio of chamber volume to solid content of the charge, and on the individual magnitude of deviation from the average (see Section 1.2.3 and Figures 1.85.2 and 1.85.3). If the required dW is specified as, e.g., <1.5%, the probability of "error" vials is extremely small; if the specification requests, e.g., 1.2% > dW > 0.6%, the probability has to be evaluated. A ratio between solids (g) and chamber volume (Vch) > 1, seven-minute intervals between DR measurements, a 90 s pressure rise time, a relatively flat DR plot with time, and shielding of the vials/product from wall and door influences can reduce the probability of undetected errors by a factor of 100 or more. [Pg.286]

If concurrency is employed, the documentation should specify how limits are enforced on the maximum allowable degree of concurrency and how accesses to shared data are synchronized in order to avoid (possibly undetected) run-time errors. [Pg.268]
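A minimal sketch of both documented guarantees (not from the source; Python threading chosen for illustration): a bounded semaphore caps the degree of concurrency, and a lock synchronizes access to the shared data:

```python
# Enforcing a maximum degree of concurrency and synchronizing shared data.
import threading

MAX_CONCURRENCY = 4
gate = threading.BoundedSemaphore(MAX_CONCURRENCY)  # caps concurrent workers
shared_lock = threading.Lock()
shared_total = 0

def worker(amount: int) -> None:
    global shared_total
    with gate:                      # at most MAX_CONCURRENCY workers run here
        with shared_lock:           # serializes the read-modify-write below
            shared_total += amount  # unsynchronized, this races undetectably

threads = [threading.Thread(target=worker, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_total)                 # reliably 100 with the lock in place
```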

