Big Chemical Encyclopedia


Precision representative methods

Replicate Analyses. Confidence in the test result is improved by reducing the measurement variability. This variability in repeat analyses is known as precision. One method to improve the precision of the measurement is to perform complete replicate analyses of the same sample, beginning with the sample preparation (26). This is appropriate when the sample is known to be representative of the material sampled. When this is not the case, multiple samples should be taken for analysis. [Pg.367]
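
The benefit of replication can be sketched with simulated data: the standard deviation of an n-replicate mean falls roughly as 1/sqrt(n). Everything below is simulated for illustration and is not from the cited work.

```python
import random
import statistics

random.seed(1)

def sd_of_mean(n, trials=4000, sd=1.0):
    """Empirical standard deviation of the mean of n simulated replicates."""
    means = [statistics.mean(random.gauss(0.0, sd) for _ in range(n))
             for _ in range(trials)]
    return statistics.stdev(means)

s1 = sd_of_mean(1)   # close to the single-measurement sd
s4 = sd_of_mean(4)   # close to sd / 2, i.e. sd / sqrt(4)
```

Quadrupling the number of replicates roughly halves the spread of the reported mean, which is the quantitative content of the statement above.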

If the experimental values P and y1 are closely reproduced by the correlating equation for g, then these residuals, evaluated at the experimental values of x1, scatter about zero. This is the result obtained when the data are thermodynamically consistent. When they are not, these residuals do not scatter about zero, and the correlation for g does not properly reproduce the experimental values P and y1. Such a correlation is, in fact, unnecessarily divergent. An alternative is to process just the P-x1 data; this is possible because the P-x1-y1 data set includes more information than necessary. Assuming that the correlating equation is appropriate to the data, one merely searches for values of the parameters a, b, and so on, that yield pressures by Eq. (4-295) that are as close as possible to the measured values. The usual procedure is to minimize the sum of squares of the residuals δP. Known as Barker's method (Austral. J. Chem., 6, pp. 207-210 [1953]), it provides the best possible fit of the experimental pressures. When the experimental data do not satisfy the Gibbs/Duhem equation, it cannot precisely represent the experimental y1 values; however, it provides a better fit than does the procedure that minimizes the sum of the squares of the δg residuals. [Pg.537]
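
The fitting step can be sketched numerically. The fragment below is a minimal illustration, not the handbook's procedure: it assumes a two-parameter Margules activity-coefficient model, modified Raoult's law, and invented vapor pressures and P-x1 data, and recovers the model parameters by minimizing the sum of squared pressure residuals (a crude coordinate search stands in for a real optimizer).

```python
import math

# All numbers below are invented for illustration:
P1SAT, P2SAT = 50.0, 30.0   # pure-component vapor pressures, kPa (assumed)

def pressure(x1, a12, a21):
    """Total pressure from modified Raoult's law with two-parameter
    Margules activity coefficients."""
    x2 = 1.0 - x1
    ln_g1 = x2 ** 2 * (a12 + 2.0 * (a21 - a12) * x1)
    ln_g2 = x1 ** 2 * (a21 + 2.0 * (a12 - a21) * x2)
    return x1 * math.exp(ln_g1) * P1SAT + x2 * math.exp(ln_g2) * P2SAT

# Synthetic "experimental" P-x1 data generated from known parameters
TRUE = (0.45, 0.25)
xs = [i / 10 for i in range(1, 10)]
Ps = [pressure(x, *TRUE) for x in xs]

def ssq(a12, a21):
    """Sum of squares of the pressure residuals (the quantity minimized)."""
    return sum((pressure(x, a12, a21) - P) ** 2 for x, P in zip(xs, Ps))

def fit(steps=200):
    """Crude coordinate search; a stand-in for a real least-squares solver."""
    a12, a21, h = 0.0, 0.0, 0.5
    for _ in range(steps):
        for da, db in ((h, 0.0), (-h, 0.0), (0.0, h), (0.0, -h)):
            if ssq(a12 + da, a21 + db) < ssq(a12, a21):
                a12, a21 = a12 + da, a21 + db
                break
        else:
            h /= 2.0   # no improving direction: refine the step size
    return a12, a21
```

Because the synthetic data contain no noise, the search recovers parameters close to the generating values, mirroring how Barker's method fits the measured pressures.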

Simply on the basis of the normal composition of marine organisms, we would expect proteins and peptides to be normal constituents of the dissolved organic carbon in seawater. While free amino acids might be expected as products of enzymic hydrolysis of proteins, the rapid uptake of these compounds by bacteria would lead us to expect that free amino acids would normally constitute a minor part of the dissolved organic pool. This is precisely what we do find: the concentration of free amino acids seldom exceeds 150 μg/l in the open ocean. It would be expected that the concentration of combined amino acids would be many times as great. There have been relatively few measurements of proteins and peptides, and most of the measurements were obtained by measuring the free amino acids before and after a hydrolysis step. Representative methods of this type have been described [245-259]. Since these methods are basically free amino acid methods, they will be discussed next in conjunction with those methods. [Pg.407]

The precision of an analytical method is usually expressed as the standard deviation or relative standard deviation (coefficient of variation) of a series of measurements. Precision represents repeatability or reproducibility of the analytical method under normal operating conditions. Precision determinations permit an estimate of the reliability of single determinations and are commonly in the range of 0.3 to 3% for dosage form assays. [Pg.438]
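
As a minimal illustration of the quantities just defined, the fragment below computes the standard deviation and relative standard deviation (coefficient of variation) of a small set of replicate assay results; the numbers are invented for the example.

```python
import statistics

# Illustrative replicate assay results (% of label claim); values are made up
replicates = [99.6, 100.2, 99.9, 100.4, 99.8, 100.1]

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)   # sample standard deviation
rsd = 100.0 * sd / mean             # relative standard deviation, %
```

For these data the RSD is about 0.29%, comfortably inside the 0.3 to 3% range quoted for dosage form assays.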

All calculations discussed here have been chosen as representative of various computational methods, and no attempt at encyclopedic coverage of calculations on simple atomic and molecular systems has been made. A somewhat more comprehensive discussion of computational methods for atoms and molecules can be found in Sanders [4], while Morgan [5] also reviews recent developments in high-precision computational methods. [Pg.370]

SEC is presently the most important method for separation and molecular characterization of synthetic polymers. The method enjoys enormous popularity, and most institutions involved in research, production, testing and application of synthetic polymers are equipped at least with a simple SEC instrument. Size exclusion chromatograms are often directly transformed into the molar mass dispersity functions (compare section 11.3.3, Molar Mass Dispersity). Often, the molar mass data presented are not absolute, because polystyrene or other polymer standards distinct from the polymer under study have been employed for the column calibration (see sections 11.6.3 and 11.7.3.1). Still, the data, equivalent to the polymer used for the column calibration, more or less precisely represent the tendencies of molar mass evolution in the course of building-up or decomposition polyreactions. [Pg.284]

There are methods that automate some of these steps. They are called composite methods because they combine results from several calculations to estimate the result that would be obtained from a more expensive calculation. The most popular families of composite methods are represented by Gaussian-3 (G3) theory [68,109] and CBS-APNO theory [110,111], where CBS stands for complete basis set. Both families of methods, which are considered reliable, include empirical parameters. The CBS theories incorporate an analytical basis set extrapolation based on perturbation theory, which is in contrast to the phenomenological extrapolation mentioned above. When the Gaussian software is used to perform these calculations, steps 2-4, above, are performed automatically, with the result labeled G3 enthalpy (or the like) in the output file [20,99]. The user must still choose a reaction (step 1) and manipulate the molecular enthalpies (steps 5 and 6). The most precise composite methods are the Weizmann-n methods, which are, however, very intensive computationally [112]. [Pg.28]
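
The basis-set extrapolation idea can be illustrated with the widely used two-point 1/X^3 formula for correlation energies. Note this is a generic textbook form, not the specific perturbation-theory-based CBS-APNO scheme, and the energies below are synthetic.

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point extrapolation assuming E(X) = E_CBS + A / X**3,
    where X is the basis-set cardinal number (generic illustrative form)."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)

# Synthetic energies constructed to follow E(X) = -0.300 + 0.05 / X**3
e3 = -0.300 + 0.05 / 3 ** 3   # "triple-zeta" value
e4 = -0.300 + 0.05 / 4 ** 3   # "quadruple-zeta" value
e_cbs = cbs_two_point(e4, 4, e3, 3)   # recovers -0.300
```

Because the synthetic energies follow the assumed form exactly, the two finite-basis values suffice to recover the complete-basis limit.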

FIGURE 3.4 The difference between ATP criteria as described here (parabolic-shaped curve) and traditional acceptance criteria, where accuracy and precision are established independently (rectangular area). The ATP criteria: the measured value is within 1.0% of the true value with 95% probability. Traditional criteria: no more than 1.0% bias and no more than 1.0% standard deviation. The x axis represents method accuracy (bias, % deviation from true value) and the y axis represents method precision (σ, standard deviation). [Pg.68]

A receptor site may be represented in various ways, and the raw data may come from high resolution crystal structures or low resolution pharmacophore models. Scoring may be rule based or energy based, and it may be fast and approximate or slow and precise. Some methods systematically try every possibility at every step, whereas others randomly make attempts and keep the ones... [Pg.53]

One representative method for CO2 detection is by infrared absorption analysis. An accurate measurement can be achieved by this method, but the equipment is relatively large-scale and expensive. Continuous maintenance is required to maintain calibration and precision. In addition, detection by the infrared method is not simple since practical measurements require the removal of other gases which influence the infrared absorption. [Pg.244]

The results obtained with the two methods confirm the measured data with good precision, with less computational time for the specialised code than for the general code. This validation on three representative test blocks can lead to many applications of modelling of the thin-skin regime. [Pg.147]

Another troublesome aspect of the reactivity ratios is the fact that they must be determined and reported as a pair. It would clearly simplify things if it were possible to specify one or two general parameters for each monomer which would correctly represent its contribution to all reactivity ratios. Combined with the analogous parameters for its comonomer, the values r1 and r2 could then be evaluated. This situation parallels the standard potential of electrochemical cells, which we are able to describe as the sum of potential contributions from each of the electrodes that comprise the cell. With x possible electrodes, there are x(x - 1)/2 possible electrode combinations. If x = 50, there are 1225 possible cells, but these can be described by only 50 electrode potentials. A dramatic data reduction is accomplished by this device. Precisely the same proliferation of combinations exists for monomer combinations. It would simplify things if a method were available for data reduction such as that used in electrochemistry. [Pg.444]
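
The counting in the electrode analogy is just the number of unordered pairs; a one-line check (purely illustrative):

```python
from math import comb

x = 50
pairs = x * (x - 1) // 2   # unordered pairs of x electrodes
assert pairs == comb(x, 2) == 1225   # 1225 cells from only 50 potentials
```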

Size. The precise determination of particle size, usually referred to as the particle diameter, can actually be made only for spherical particles. For any other particle shape, a precise determination is practically impossible and particle size represents an approximation only, based on an agreement between producer and consumer with respect to the testing methods (see Size measurement of particles). [Pg.179]

Selection of appropriate time intervals for increment extractions relates to property variation (inhomogeneity) within material flow streams. Ten-minute extraction intervals are generally adequate to obtain suitably representative samples from material flows under practical circumstances. Precise determination of extraction intervals consistent with individual applications can be calculated through autocorrelation of historical sampling data, a statistical method described in references (Gy, Pitard). [Pg.1760]
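
A rough sketch of how autocorrelation of historical sampling data might inform interval selection follows. The simulated series, the threshold, and the helper names are all invented for illustration; this is not the Gy/Pitard procedure itself.

```python
import random

def autocorr(series, lag):
    """Sample autocorrelation of a series at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((v - mean) ** 2 for v in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

def decorrelation_lag(series, threshold=0.2):
    """Smallest lag at which the autocorrelation drops below threshold --
    a rough guide to how far apart increments are effectively independent."""
    for lag in range(1, len(series) // 2):
        if autocorr(series, lag) < threshold:
            return lag
    return None

# Simulated "historical sampling data" with short-range correlation (AR(1))
random.seed(0)
series, v = [], 0.0
for _ in range(500):
    v = 0.8 * v + random.gauss(0.0, 1.0)
    series.append(v)

lag = decorrelation_lag(series)
```

Increments spaced further apart than this lag carry nearly independent information, so the lag (converted to clock time) suggests a sensible extraction interval.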

Figure 2.5-1 illustrates the fact that probabilities are not precisely known but may be represented by a "bell-like" distribution, the amplitude of which expresses the degree of belief. The probability that a system will fail is calculated by combining component probabilities as unions (addition) and intersections (multiplication) according to the system logic. Instead of point values for these probabilities, distributions are used, which results in a distributed probability of system failure. This section discusses several methods for combining distributions, namely 1) convolution, 2) moments method, 3) Taylor's series, 4) Monte Carlo, and 5) discrete probability distributions (DPD). [Pg.56]
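
Of these, the Monte Carlo approach is the easiest to sketch: sample each component's uncertain failure probability, combine the samples through the system logic (here a simple union of two components), and examine the resulting distribution. The distribution shapes, means, and spreads below are all invented for illustration.

```python
import random
import statistics

random.seed(42)

def sample_p(mean, sd):
    """Draw one uncertain component failure probability.
    A clipped normal is an assumption made only for this sketch."""
    return min(max(random.gauss(mean, sd), 0.0), 1.0)

N = 20000
samples = []
for _ in range(N):
    p1 = sample_p(0.01, 0.003)   # component 1: invented mean and spread
    p2 = sample_p(0.02, 0.005)   # component 2: invented mean and spread
    # Union (OR) logic: the system fails if either component fails
    samples.append(1.0 - (1.0 - p1) * (1.0 - p2))

mean_sys = statistics.mean(samples)   # near 1 - (1-0.01)*(1-0.02) = 0.0298
```

The list of samples is itself the distributed system failure probability; its percentiles express how component uncertainty propagates to the system level.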

Some of this information is presented in the form of very specific examples, used to illustrate a range of approaches and methods. This range is not intended to be all-inclusive: your company's precise organization may not be represented here. However, every effort has been made to address several common structures, including centralized and decentralized operations and "top-down" and "bottom-up" management structures. [Pg.4]

Two alternative methods have been used in kinetic investigations of thermal decomposition and, indeed, other reactions of solids: in one, yield-time measurements are made while the reactant is maintained at a constant (known) temperature [28], while, in the second, the sample is subjected to a controlled rising temperature [76]. Measurements using both techniques have been widely and variously exploited in the determination of kinetic characteristics and parameters. In the more traditional approach, isothermal studies, the maintenance of a precisely constant temperature throughout the reaction period represents an ideal which cannot be achieved in practice, since a finite time is required to heat the material to reaction temperature. Consequently, the initial segment of the α (fractional decomposition)-time plot cannot refer to isothermal conditions, though the effect of such deviation can be minimized by careful design of equipment. [Pg.41]
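
For the isothermal case, a familiar first-order model gives α(t) = 1 - exp(-kt) with an Arrhenius rate constant. A small sketch follows; the rate parameters are invented and do not come from the cited studies.

```python
import math

R = 8.314     # gas constant, J/(mol K)
A = 1.0e12    # pre-exponential factor, 1/s (assumed)
EA = 150e3    # activation energy, J/mol (assumed)

def k(T):
    """Arrhenius rate constant at absolute temperature T."""
    return A * math.exp(-EA / (R * T))

def alpha_isothermal(T, t):
    """Fractional decomposition after time t (s) held at constant T (K),
    assuming simple first-order kinetics."""
    return 1.0 - math.exp(-k(T) * t)
```

The strong temperature dependence of k(T) is why the finite heat-up time matters: α accumulated while the sample approaches the set temperature does not belong to the isothermal segment.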

The semiempirical methods represent a real alternative for this research. Aside from the limitation to the treatment of only special groups of electrons (e.g. π- or valence electrons), the neglect of numerous integrals above all leads to a drastic reduction of computer time in comparison with ab initio calculations. To compensate for the inaccuracies introduced by these neglects, the methods are parametrized: values of special integrals are estimated or calibrated semiempirically with the help of experimental results. The usefulness of a set of parameters can be judged by how well it reproduces experimentally obtained properties of reference molecules. Each of the numerous semiempirical methods has its own set of parameters, because there is no universal set that calculates all properties of molecules with exact precision. The parametrization of a method is always tailored to a specific problem. This explains the multiplicity of semiempirical methods. [Pg.179]
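
The calibration idea, choosing parameter values so that a model reproduces reference data, can be sketched in miniature. The model form, the single parameter, and the reference values below are all invented; real semiempirical parametrizations fit many parameters to many properties at once.

```python
# Hypothetical one-parameter calibration against "reference molecules"
refs = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.1)]   # (descriptor, reference value)

def model(x, beta):
    """Toy model, linear in the empirical parameter beta (assumed form)."""
    return beta * x + 0.1

def ssq(beta):
    """Sum of squared deviations from the reference values."""
    return sum((model(x, beta) - y) ** 2 for x, y in refs)

# Because the model is linear in beta, least squares has a closed form
num = sum(x * (y - 0.1) for x, y in refs)
den = sum(x * x for x, _ in refs)
beta_opt = num / den
```

The fitted parameter minimizes the deviation from the reference set, mirroring (in caricature) how semiempirical integrals are calibrated to experiment.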



© 2024 chempedia.info