Big Chemical Encyclopedia

Model analytical data errors

As in many such problems, some form of pretreatment of the data is warranted. In all applications discussed here, the analytical data either have been left untreated or have been normalized to the relative concentration of each peak in the sample. Quality Assurance. Principal components analysis can be used to detect large sample differences that may be due to instrument error, noise, etc. This is illustrated by using samples 17-20 in Appendix I (Figure 6). These samples are replicate assays of a 1:1:1:1 mixture of the standard Aroclors. Fitting these data for the four samples to a 2-component model and plotting the first two principal components (Theta 1 and Theta 2 [scores] in... [Pg.210]
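
A minimal sketch of this kind of replicate check, assuming the peak table is available as a NumPy array with one row per sample and normalized peak areas as columns; the synthetic data, variable names, and the use of scikit-learn's PCA are illustrative only, not the procedure of the original study:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical peak table: 4 replicate assays (rows) x 20 peaks (columns),
# each row normalized to relative concentrations.
rng = np.random.default_rng(0)
base = rng.random(20)
X = np.vstack([base + rng.normal(scale=0.01, size=20) for _ in range(4)])
X[3] += rng.normal(scale=0.10, size=20)        # simulate one faulty replicate
X = X / X.sum(axis=1, keepdims=True)           # renormalize rows

# Two-component model: scores (theta1, theta2) for each replicate.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# Replicates of the same mixture should cluster tightly; a score far from
# the cluster centre flags a suspect assay (instrument error, noise, ...).
dist = np.linalg.norm(scores - scores.mean(axis=0), axis=1)
for i, d in enumerate(dist, start=1):
    flag = "  <-- check this replicate" if d > 3 * np.median(dist) else ""
    print(f"replicate {i}: theta1={scores[i-1,0]:+.3f}, "
          f"theta2={scores[i-1,1]:+.3f}, distance={d:.3f}{flag}")
```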

Error Estimates. Little attention has been paid to errors associated with either the thermodynamic or the analytical data utilized in chemical models. M.L. Good (31) considers this to be one of the four principal problems of the day with regard to use and misuse of scientific data. It is essential that chemical models compute propagated standard deviations for their analytical, thermodynamic, and kinetic data. This provision exists in the model of Ball et al. (32). [Pg.12]

It appears that the largest source of error in these comparisons is the analytical data. The next largest source of error seems to be the adequacy of activity coefficients and stability constants used in the model, and last is the reliability of the field Eh measurement. Close inspection of Figure 3 shows a slight bias of calculated Eh values towards more oxidizing potentials. Fe(III) complexes are quite strong and it is likely that some important complexes, possibly FeHSO₄²⁺ (54, 55), should be included in the chemical model, but the thermodynamic data are not reliable enough to justify its use. [Pg.61]

Standard Deviations. To evaluate the effect of errors in the analytical input data on the modeling calculations, propagated standard deviations are now computed for a subset of the solid-phase activity products considered in the model. Arrangements have also been made to enter and output standard deviations for thermodynamic data. [Pg.825]
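
A small sketch of what such first-order (propagated) standard deviations can look like for a single solid-phase activity product; the hypothetical solid A(OH)2, the activities, and their sigmas are invented for illustration and are not taken from the cited model:

```python
import numpy as np

# First-order propagation of analytical standard deviations into
# log10(IAP) for a hypothetical solid A(OH)2:  IAP = a_A * a_OH**2.
a_A,  s_A  = 1.2e-5, 0.1e-5      # activity of A2+ and its analytical sigma
a_OH, s_OH = 3.0e-8, 0.2e-8      # activity of OH- and its sigma

log_iap = np.log10(a_A) + 2.0 * np.log10(a_OH)

# d(log10 a)/da = 1/(a ln 10); errors assumed uncorrelated.
ln10 = np.log(10.0)
var = (s_A / (a_A * ln10))**2 + (2.0 * s_OH / (a_OH * ln10))**2
print(f"log10(IAP) = {log_iap:.3f} +/- {np.sqrt(var):.3f}")
```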

Most importantly, however, the use of the experimental errors allows an objective judgement of the agreement between model and data, i.e., the validity of the conceptual model that was adopted to describe the data. The model selection is based on the χ² test. The expected minimum value of χ² is the number of degrees of freedom ν = n − m, where n is the number of data points and m is the number of free parameters. The probability p for χ² to be higher than a given value due to random analytical errors, although the model description is correct, can be obtained from the χ²-distribution with ν degrees of freedom. If p is lower than some cut-off value pc (pc = 0.01 proved to be appropriate), the model is rejected. [Pg.646]
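
A minimal sketch of this χ² acceptance test using SciPy's chi-squared survival function; the cut-off pc = 0.01 follows the text, while the function name and the example numbers are illustrative:

```python
from scipy.stats import chi2

def accept_model(chi2_min, n_points, n_params, p_cut=0.01):
    """Accept the model if the probability of reaching chi2_min by random
    analytical error alone (given a correct model) exceeds p_cut."""
    dof = n_points - n_params          # expected value of chi2 for a correct model
    p = chi2.sf(chi2_min, dof)         # P(chi2 > chi2_min) with dof degrees of freedom
    return p, p >= p_cut

# Illustrative numbers: 25 data points, 4 free parameters, chi2_min = 41.3
p, ok = accept_model(41.3, 25, 4)
print(f"p = {p:.4f} -> {'accept' if ok else 'reject'} the model")
```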

Inadequate stoichiometry and poor calibration of the analytical device are interconnected problems. The kinetic model itself follows the stoichiometric rules, but an inadequate calibration of the analytical instrument causes systematic deviations. This can be illustrated with a simple example. Assume that a bimolecular reaction, A + B → P, is carried out in a liquid-phase batch reactor. The density of the reaction mixture is assumed to be constant. The reaction is started with A and B, and no P is present in the initial mixture. The concentrations are related by cP = c0A − cA = c0B − cB, i.e. the amount of product P formed equals the amount of reactant consumed. If the concentration of component B has a calibration error, we obtain instead of the correct concentration cB an erroneous one, which does not fulfil the stoichiometric relation. If the error is large for a single component, it is easy to recognize, but the situation can be much worse if calibration errors are present in several components: all of their effects are spread out during nonlinear regression, in the estimation of the model parameters. This is reflected by the fact that the total mass balance is not fulfilled by the experimental data. A way to check the analytical data is to use some form of total balance, e.g. atom balances or total molar amounts or concentrations. For example, for the model reaction A + B → P, we have the relation cA + cB + 2cP = c0A + c0B = constant (again, c0P = 0). [Pg.447]
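
A short sketch of this total-balance check for A + B → P on hypothetical batch data, with a deliberate calibration error imposed on cB; all numbers are invented for illustration:

```python
import numpy as np

# Hypothetical batch-reactor data for A + B -> P (constant density).
c0A, c0B = 1.00, 1.20                      # initial concentrations, mol/L
cA = np.array([1.00, 0.80, 0.55, 0.35])    # measured A
cP = c0A - cA                              # P formed (exact stoichiometry)
cB_true = c0B - cP                         # B consistent with stoichiometry
cB_meas = 1.05 * cB_true + 0.02            # B with a calibration error (slope + offset)

# For A + B -> P the invariant is cA + cB + 2*cP = c0A + c0B at every sample.
balance = cA + cB_meas + 2.0 * cP
print("total balance:", np.round(balance, 3), " expected:", c0A + c0B)
print("max deviation:", np.abs(balance - (c0A + c0B)).max())
```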

Statistical and algebraic methods, too, can be classed as either rugged or not: they are rugged when algorithms are chosen that, on repetition of the experiment, do not get derailed by the random analytical error inherent in every measurement, that is, when similar coefficients are found for the mathematical model and equivalent conclusions are drawn. Obviously, the choice of the fitted model plays a pivotal role. If a model is to be fitted by means of an iterative algorithm, the initial guess for the coefficients should not be too critical. In a simple calculation a combination of numbers and truncation errors might lead to a division by zero and crash the computer. If the data evaluation scheme is such that errors of this type could occur, the validation plan must make provisions to test this aspect. [Pg.146]
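
One way to probe this kind of ruggedness is to refit the same model to data perturbed by noise of the order of the analytical error and watch how much the coefficients move; the straight-line model, noise level, and data below are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

# Ruggedness probe: perturb the data by the analytical error, refit, repeat.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 12)
y = 2.5 * x + 1.0 + rng.normal(scale=0.2, size=x.size)   # "measured" data
sigma_analytical = 0.2

coeffs = []
for _ in range(200):
    y_pert = y + rng.normal(scale=sigma_analytical, size=y.size)
    slope, intercept = np.polyfit(x, y_pert, deg=1)
    coeffs.append((slope, intercept))

coeffs = np.array(coeffs)
print("slope    : %.3f +/- %.3f" % (coeffs[:, 0].mean(), coeffs[:, 0].std()))
print("intercept: %.3f +/- %.3f" % (coeffs[:, 1].mean(), coeffs[:, 1].std()))
# Small spreads mean similar coefficients would be found on repetition of the
# experiment (rugged); large spreads mean the conclusions may not repeat.
```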

As probabilistic exposure and risk assessment methods are developed and become more frequently used for environmental fate and effects assessment, OPP increasingly needs distributions of environmental fate values rather than single point estimates, and quantitation of error and uncertainty in measurements. Probabilistic models currently being developed by the OPP require distributions of environmental fate and effects parameters either by measurement, extrapolation, or a combination of the two. The models' predictions will allow regulators to base decisions on the likelihood and magnitude of exposure and effects for a range of conditions which vary both spatially and temporally, rather than in a specific environment under static conditions. This increased need for basic data on environmental fate may increase data collection and drive development of less costly and more precise analytical methods. [Pg.609]

By trial and error it is possible to find out which of the successive approximations is valid. ymax can be measured or assessed from the beamline geometry. Together with q it can be varied within reasonable intervals, in order to fit analytical models for I1(s) (e.g., after Eq. (8.110) or Eq. (8.112)) to measured data. [Pg.202]

A major advantage of the simple model described in this paper lies in its potential applicability to the direct evaluation of experimental data. Unfortunately, it is clear from the form of the typical isotherms, especially those for high polymers (large n) that, even with a simple model, this presents considerable difficulty. The problems can be seen clearly by consideration of some typical polymer adsorption data. Experimental isotherms for the adsorption of commercial polymer flocculants on a kaolin clay are shown in Figure 4. These data were obtained, in the usual way, by determination of residual polymer concentrations after equilibration with the solid. In general, such methods are limited at both extremes of the concentration scale. Serious errors arise at low concentration due to loss in precision of the analytical technique and at high concentration because the amount adsorbed is determined by the difference between two large numbers. [Pg.32]
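
A brief numerical illustration (with invented numbers) of why determination by difference loses precision at the high end of the concentration scale:

```python
import numpy as np

# Adsorbed amount from solution depletion: q = (c0 - c_eq) * V / m.
# At high concentration q is the small difference of two large numbers,
# so a fixed relative analytical error on the concentrations dominates.
V, m = 0.100, 1.0            # L of solution, g of clay (illustrative)
rel_err = 0.01               # 1 % relative error of the analytical method

for c0, c_eq in [(50.0, 10.0), (500.0, 460.0)]:   # mg/L, low vs high range
    q = (c0 - c_eq) * V / m                       # mg adsorbed per g
    sigma_q = np.hypot(rel_err * c0, rel_err * c_eq) * V / m
    print(f"c0={c0:6.1f}  c_eq={c_eq:6.1f}  q={q:5.2f} mg/g  "
          f"+/- {sigma_q:4.2f}  ({100*sigma_q/q:4.1f} % relative)")
```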

Furthermore, it is sometimes questionable to use literature data for modeling purposes, as small variations in process parameters, reactor hydrodynamics, and analytical equipment limitations could skew selectivity results. To obtain a full product spectrum from an FT process, a few analyses need to be added together to form a complete picture. This normally involves analysis of the tail gas, water, oil, and wax fractions, which need to be combined in the correct ratio (calculated from the drainings of the respective phases) to construct a true product spectrum. Reducing the number of analyses to completely describe the product spectrum is one obvious way to minimize small errors compounding into large variations in... [Pg.231]

This is perhaps the "best" solution for the given data set, and it is certainly the most interesting. It is not offered as a rigorous solution, however, for the lack of fit (χ²/df = [9.64]²) implies additional sources of error, which may be due to additional scatter about the calibration curve (σy "between" component), residual error in the analytic model for the calibration function, or errors in the "standard" x-values. (We believe the last source of error to be the most likely for this data set.) For these reasons, and because we wish to avoid complications introduced by non-linear least squares fitting, we take the model y = B + Ax^(1/2) and the relation σy = 0.028 + 0.49x to be exact and then apply linear WLS for the estimation of B and A and their standard errors. [Pg.77]
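
A sketch of the weighted least-squares step described above, for the model y = B + A·x^(1/2) with weights 1/σy² and σy = 0.028 + 0.49x taken as exact; the x, y values below are placeholders, not the original calibration data:

```python
import numpy as np

# Placeholder calibration data.
x = np.array([0.01, 0.05, 0.10, 0.20, 0.40, 0.80])
y = np.array([0.05, 0.12, 0.17, 0.23, 0.32, 0.45])

sigma_y = 0.028 + 0.49 * x       # error model taken as exact
w = 1.0 / sigma_y**2             # WLS weights

# Design matrix for the linear-in-parameters model y = B + A*sqrt(x).
X = np.column_stack([np.ones_like(x), np.sqrt(x)])
W = np.diag(w)
XtWX = X.T @ W @ X
beta = np.linalg.solve(XtWX, X.T @ W @ y)   # [B, A]
cov = np.linalg.inv(XtWX)                   # parameter covariance for 1/sigma**2 weights
se = np.sqrt(np.diag(cov))

print(f"B = {beta[0]:.4f} +/- {se[0]:.4f}")
print(f"A = {beta[1]:.4f} +/- {se[1]:.4f}")
```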

Improved reference analysis: If the method assessment reveals that the main source of model error comes from error in the reference analytical method used to generate the (y) calibration data, then efforts to improve the accuracy and precision of this method could prove to be very beneficial. [Pg.426]

The algorithm used is attributed to J. B. J. Read. For many manipulations on large matrices it is only practical for use with a fairly large computer. The data are arranged in two matrices by sample i and nuclide j: one matrix, V, contains the amount of each nuclide in each sample; the other matrix, E, contains the variances of these numbers, as estimated from counting statistics, agreement between replicate analyses, and known analytical errors. It is also possible to add an arbitrary term Fik to each variance to account for random effects between samples not considered in the model; this is usually done in terms of an additional fractional error. Zeroes are inserted for missing data in cases in which not all nuclides were measured in every sample. [Pg.299]
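
A small sketch of the data layout described here (amount matrix V, variance matrix E, an optional fractional-error term, and zeroes for missing data); the array contents are invented:

```python
import numpy as np

# Rows = samples i, columns = nuclides j.
# V holds measured amounts, E their variances; zeros mark missing data.
V = np.array([[10.2,  5.1, 0.0],      # sample 1 (third nuclide not measured)
              [ 9.8,  4.9, 2.3],      # sample 2
              [11.0,  5.4, 2.1]])     # sample 3
E = np.array([[0.30, 0.10, 0.0],
              [0.25, 0.09, 0.05],
              [0.35, 0.12, 0.04]])

# Optional extra term: an additional fractional error f per measurement to
# absorb sample-to-sample effects not covered by the model.
f = 0.05
F = (f * V)**2
E_total = np.where(V > 0.0, E + F, 0.0)   # keep zeros where data are missing
print(E_total)
```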

