
Errors from data processing

The composite envelope is then plotted over the envelope of each individual peak. It is seen that the retention difference, if taken from the maxima of the composite envelope, gives a value less than 80% of the true retention difference. Furthermore, as the peaks move closer together, this error increases rapidly. Unfortunately, this type of error is not normally taken into account by most data processing software. It follows that, if such data were used for solute identification or column design, the results could be grossly in error. [Pg.168]
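To make the effect concrete, the short sketch below (an illustration added here, not taken from the source) sums two Gaussian peaks and compares the true retention difference with the one read from the maxima of the composite envelope; the peak positions and width are assumed values chosen so the pair is only just resolved.

```python
# Two overlapping Gaussian peaks: the composite envelope pulls the apparent
# maxima together, so the measured retention difference underestimates the
# true one. All numbers are illustrative.
import numpy as np

t = np.linspace(0.0, 10.0, 20001)   # time axis, arbitrary units
sigma = 0.45                        # common peak standard deviation (assumed)
t1, t2 = 4.5, 5.5                   # true retention times (assumed)

envelope = (np.exp(-0.5 * ((t - t1) / sigma) ** 2)
            + np.exp(-0.5 * ((t - t2) / sigma) ** 2))

# Local maxima of the composite envelope -- what a data system would report.
is_max = (envelope[1:-1] > envelope[:-2]) & (envelope[1:-1] > envelope[2:])
maxima = t[1:-1][is_max]

true_dt = t2 - t1
if maxima.size >= 2:
    apparent_dt = maxima[-1] - maxima[0]
    print(f"true dt = {true_dt:.3f}, apparent dt = {apparent_dt:.3f} "
          f"({100 * apparent_dt / true_dt:.0f}% of true)")   # ~70% here
else:
    print("peaks fused: the envelope shows a single maximum")
```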

Three major themes have been emphasized in this chapter. The first is that an effective data collection system is one of the most powerful tools available to minimize human error. Second, data collection systems must adequately address underlying causes. Merely tabulating accidents in terms of their surface similarities, or using inadequate causal descriptions such as "process worker failed to follow procedures," is not sufficient to develop effective remedial strategies. Finally, a successful data collection and incident investigation system requires management to hold an enlightened, systems-oriented view of human error, together with participation and commitment from the workforce. [Pg.291]

For an aquatic model of chemical fate and transport, the input loadings associated with both point and nonpoint sources must be considered. Point loads from industrial or municipal discharges can show significant daily, weekly, or seasonal fluctuations. Nonpoint loads determined either from data or nonpoint loading models are so highly variable that significant errors are likely. In all these cases, errors in input to a model (in conjunction with output errors, discussed below) must be considered in order to provide a valid assessment of model capabilities through the validation process. [Pg.159]

During normal operation of a chemical plant it is common practice to obtain data from the process, such as flowrates, compositions, pressures, and temperatures. The numerical values resulting from these observations do not provide consistent information, since they contain some type of error, either random measurement errors or gross (biased) errors. This means that the conservation equations (mass and energy), the common functional model chosen to represent operation at steady state, are not satisfied exactly. [Pg.23]
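As a minimal illustration of this point (flow values and error magnitudes are assumed, not from the source), the sketch below shows why raw measurements around a single mixing point fail to close the steady-state mass balance exactly:

```python
# For a mixer, conservation of mass requires F1 + F2 - F3 = 0, but the
# measured values carry random error, so the computed residual is nonzero.
import numpy as np

rng = np.random.default_rng(0)
true_flows = np.array([100.0, 50.0, 150.0])     # kg/h, satisfy F1 + F2 = F3
sigma = np.array([2.0, 1.0, 3.0])               # assumed measurement std devs

measured = true_flows + rng.normal(0.0, sigma)  # one set of noisy readings
residual = measured[0] + measured[1] - measured[2]
print(f"measured flows: {measured.round(2)}, balance residual: {residual:.2f} kg/h")
```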

Parameter estimation is also an important activity in process design, evaluation, and control. Because data taken from chemical processes do not satisfy process constraints, error-in-variables methods provide both parameter estimates and reconciled data estimates that are consistent with respect to the model. These problems represent a special class of optimization problems, because the structure of least squares can be exploited in the development of optimization methods. A review of this subject can be found in the work of Biegler et al. (1986). [Pg.25]
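The least-squares structure referred to above can be shown in its simplest linear form. The sketch below (an added illustration with an assumed two-unit flowsheet and unit variances; it is not the algorithm of Biegler et al.) reconciles three flow measurements so that the estimates satisfy the balance constraints exactly:

```python
# Linear data reconciliation by weighted least squares:
#   minimize (x - y)' Sigma^{-1} (x - y)  subject to  A x = 0
# with the closed-form solution
#   x_hat = y - Sigma A' (A Sigma A')^{-1} A y
import numpy as np

# Balances for two units in series: F1 - F2 = 0 and F2 - F3 = 0.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])
y = np.array([101.3, 98.7, 100.9])              # raw measurements (kg/h)
Sigma = np.diag([1.0, 1.0, 1.0])                # assumed error covariance

lam = np.linalg.solve(A @ Sigma @ A.T, A @ y)   # Lagrange multipliers
x_hat = y - Sigma @ A.T @ lam

print("reconciled flows:", x_hat.round(3))      # all equal: balances closed
print("constraint residuals:", (A @ x_hat).round(10))
```

With unit variances this reduces to an orthogonal projection of the measurements onto the constraint null space; unequal variances would shift more of the adjustment onto the less reliable instruments.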

Most techniques for process data reconciliation start with the assumptions that the measurement errors are random variables obeying a known statistical distribution and that the covariance matrix of the measurement errors is given. In contrast, in this chapter we discuss direct and indirect approaches for estimating the variances of measurement errors from sample data. Furthermore, a robust strategy is presented for dealing with the presence of outliers in the data set. [Pg.202]
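As an illustration of the direct idea (synthetic data; this is not the specific procedure developed in the chapter), the sketch below estimates a measurement-error standard deviation from repeated samples and shows how a robust estimate based on the median absolute deviation (MAD) resists outliers:

```python
# Direct variance estimation from repeated measurements, with and without
# robustness to outliers. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
readings = rng.normal(50.0, 0.5, size=200)   # repeated measurements
readings[:3] = [58.0, 43.0, 61.0]            # a few gross outliers

std_classic = readings.std(ddof=1)           # inflated by the outliers
mad = np.median(np.abs(readings - np.median(readings)))
std_robust = 1.4826 * mad                    # consistent for Gaussian errors
print(f"classical std: {std_classic:.3f}, robust (MAD) std: {std_robust:.3f}")
```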

Only a few publications in the literature have dealt with this problem. Almasy and Mah (1984) presented a method for estimating the covariance matrix of measured errors by using the constraint residuals calculated from available process data. Darouach et al. (1989) and Keller et al. (1992) have extended this approach to deal with correlated measurements. Chen et al. (1997) extended the procedure further, developing a robust strategy for covariance estimation, which is insensitive to the presence of outliers in the data set. [Pg.203]

The relationship takes into account that the day-to-day sample determinations are subject to error from several sources: random error, instrument error, observer error, preparation error, and so on. This view is the basis of the process of fitting data to a model, which results in confidence intervals based on the intrinsic lack of fit and the random variation in the data. [Pg.186]
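A minimal sketch of such a fit (synthetic day-to-day data; the straight-line model is an assumption made for illustration) that yields parameter confidence intervals from the residual scatter:

```python
# Fit a straight line to noisy day-to-day determinations and report 95%
# confidence intervals for the fitted parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 12)
y = 2.0 + 0.8 * x + rng.normal(0.0, 0.4, x.size)   # combined random errors

res = stats.linregress(x, y)
t_crit = stats.t.ppf(0.975, df=x.size - 2)
print(f"slope = {res.slope:.3f} +/- {t_crit * res.stderr:.3f} (95% CI)")
print(f"intercept = {res.intercept:.3f} +/- {t_crit * res.intercept_stderr:.3f} (95% CI)")
```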

Automated Decision-Making Process. For daily operation, TOGA is run automatically from gas chromatograph output and databases (as opposed to interactively) to generate expert interpretation of data; this speeds up the data analysis task and removes the element of human error from routine diagnoses. [Pg.26]

Whereas the microprocessor controls an individual basic operation, the central computer, which has all the analytical procedures held in its memory, controls the particular analytical procedure required. At the appropriate time, the central computer transmits the relevant set of parameters to the corresponding units and provides the schedule for the sample-transport operation. All units are monitored to ensure proper functioning. If one of the units signals an error, a predetermined action, such as disposing of the sample, is taken. The basic results from the units are transferred to the central computer, the final results are calculated, and the report is passed to the output terminal. These results can also be transmitted to other data processing equipment for administrative or management purposes. The central control is, therefore, the leading element in a hierarchy of... [Pg.42]
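A minimal sketch of such a hierarchy (all class, method, and parameter names here are hypothetical, added for illustration only):

```python
# Central controller dispatching parameters to unit microprocessors,
# monitoring for errors, and collecting basic results.
class AnalyzerUnit:
    """Stands in for one microprocessor-controlled basic operation."""
    def __init__(self, name):
        self.name = name

    def run(self, params):
        # A real unit would perform the operation and report hardware faults.
        return "ok", {"unit": self.name, "value": 42.0, **params}


class CentralController:
    def __init__(self, units, procedure):
        self.units = units          # unit name -> AnalyzerUnit
        self.procedure = procedure  # ordered (unit name, parameters) steps

    def run_sample(self, sample_id):
        results = []
        for unit_name, params in self.procedure:
            status, raw = self.units[unit_name].run(params)
            if status != "ok":      # predetermined action on unit error
                return {"sample": sample_id, "action": "dispose", "failed": unit_name}
            results.append(raw)
        # Final results would be calculated here and passed to the output
        # terminal or other data processing equipment.
        return {"sample": sample_id, "action": "report", "raw": results}


units = {"injector": AnalyzerUnit("injector"), "detector": AnalyzerUnit("detector")}
procedure = [("injector", {"volume_uL": 10}), ("detector", {"wavelength_nm": 254})]
print(CentralController(units, procedure).run_sample("S-001"))
```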

The LTQ-Orbitrap has resolution and mass accuracy performance close to that of the LTQ-FTICR. As shown in Table 5.3 (column 4), LTQ-Orbitrap accurate mass measurements, using external calibration, for a set of 30 pharmaceutical compounds resulted in less than 2.3 ppm error. The data were acquired with a 4-min, 1-mL/min-flow-rate, positive-mode LC-ESI-MS method, where all measurements were performed within 5 h of mass calibration. Mass accuracies below 2-3 ppm, and often below 1 ppm, can be routinely achieved in both the positive- and negative-ion modes (Table 5.3, columns 4 and 5). The long-term mass stability of the LTQ-Orbitrap is not as consistent as that observed for the LTQ-FTICR-MS, and the Orbitrap requires more frequent mass calibration; however, mass calibration is a routine procedure that can be accomplished within 5-10 min. Figure 5.7 displays a 70-h (external calibration) mass accuracy plot for three negative ions collected with an LTQ-Orbitrap, where the observed accuracy is 2.5 ppm or better with little mass drift for each ion. Overall, for routine accurate mass measurements on the Orbitrap, once-a-week calibration (for the desired polarity) is required; however, considering the ease of the process, more frequent external calibration is not a burden. [Pg.204]
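For reference, the ppm figures quoted above follow from a simple relation; the masses in this short example are assumed values, not entries from Table 5.3:

```python
# Mass accuracy in parts per million:
#   error_ppm = (m_measured - m_theoretical) / m_theoretical * 1e6
m_theoretical = 285.0789   # theoretical monoisotopic m/z (assumed)
m_measured = 285.0795      # measured m/z (assumed)

error_ppm = (m_measured - m_theoretical) / m_theoretical * 1e6
print(f"mass error: {error_ppm:.2f} ppm")   # about 2.1 ppm
```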

We find the answers to the four questions in the course of the data quality assessment (DQA), which is the scientific and statistical evaluation of data to determine whether data obtained from environmental data operations are of the right type, quality, and quantity to support their intended use (EPA, 1997a). Part of the DQA is data evaluation, which enables us to find out whether the collected data are valid. Another part of the DQA, the reconciliation of the collected data with the DQOs, allows us to establish data relevancy. Thus, the application of the entire DQA process to collected data enables us to determine the effect of total error on data usability. [Pg.8]

Complete knowledge of data quality, which arises only from Level 4 validation, enables the data user to make project decisions with the highest level of confidence in the data quality. That is why Level 4 validation is usually conducted for data collected to support decisions related to human health. Level 4 validation allows the reconstruction of the entire laboratory data acquisition process. It exposes errors that cannot be detected during Level 3 validation, the most critical of which are data interpretation errors and data management errors, such as incorrect computer algorithms. [Pg.281]

Data processing methods provide a substitute for the missing description of analytical data. Repeated measurements of the same water source are occasionally available. In such cases a mean value may be calculated for each parameter (dissolved ions, pH, or temperature), and the standard deviations of these values serve as an estimate of the analytical error. In fact, these standard deviations also incorporate the natural fluctuations in the measured water source; therefore, they serve as conservative estimates of the analytical errors. These estimated errors may then be applied to all the data reported by the same laboratory during the same period. Experience shows that old data are often good and acceptable. [Pg.107]
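A minimal sketch of this approach (the replicate values are made up for illustration): the mean of each parameter is taken over repeated analyses, and its standard deviation is used as a conservative analytical-error estimate.

```python
# Per-parameter means and standard deviations over repeated analyses of
# the same water source; the std devs serve as conservative error estimates.
import numpy as np

params = ["Cl- (mg/L)", "Na+ (mg/L)", "pH"]
replicates = np.array([                 # rows = analyses, cols = parameters
    [141.0, 92.5, 7.81],
    [138.5, 93.1, 7.78],
    [143.2, 91.8, 7.84],
    [140.1, 92.9, 7.80],
])

means = replicates.mean(axis=0)
errors = replicates.std(axis=0, ddof=1)
for name, m, e in zip(params, means, errors):
    print(f"{name}: {m:.2f} +/- {e:.2f}")
```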








