
Data treatment

Due to its principle and instrumental realisation, atomic absorption spectrometry is a technique for quantitative analysis and is practically unsuitable for qualitative analysis. The quantitative response is governed by the law of Lambert and Beer, i.e. the absorbance A is proportional to the optical path length l, the absorption coefficient k at the observed wavelength, and the concentration c of the absorbing species. [Pg.465]
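
In symbols (a brief restatement using the quantities named above, with A taken as the decadic absorbance):

```latex
A \;=\; \log_{10}\!\left(\frac{I_0}{I}\right) \;=\; k\,l\,c
```

where I0 and I are the incident and transmitted intensities.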

Quantitation by external calibration The most common and straightforward method of calibration in atomic absorption spectrometry is the use of an external calibration with suitable standard solutions. It is based on the assumption that the standard solutions match the composition of the sample sufficiently well. This assumption must always be examined with care since, for example, samples of different viscosity may be aspirated at different rates in flame AAS. [Pg.465]
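
A minimal sketch of this kind of external calibration, with hypothetical concentrations and absorbances and a plain unweighted straight-line fit (not taken from the cited text):

```python
import numpy as np

# Hypothetical calibration standards: concentration (mg/L) vs. measured absorbance
conc_std = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
abs_std = np.array([0.002, 0.051, 0.103, 0.205, 0.398])

# Ordinary least-squares straight line: A = slope * c + intercept
slope, intercept = np.polyfit(conc_std, abs_std, 1)

# Invert the calibration to estimate the concentration of an unknown sample
abs_sample = 0.150
conc_sample = (abs_sample - intercept) / slope
print(f"Estimated sample concentration: {conc_sample:.3f} mg/L")
```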

Quantitation by the standard addition technique Matrix interferences result from the bulk physical properties of the sample, e.g. viscosity, surface tension, and density. As these factors commonly affect nebulisation efficiency, they lead to a different response for standards and sample, particularly with flame atomisation. The most common way to overcome such matrix interferences is to employ the method of standard additions. This method in effect creates a calibration curve in the matrix by adding incremental amounts of a concentrated standard solution to the sample. As only small volumes of standard solution are added, the additions do not alter the bulk properties of the sample significantly, and the matrix remains essentially the same. Since the technique is based on linear extrapolation, particular care has to be taken to ensure that one operates in the linear range of the calibration curve, otherwise significant errors may result. Also, proper background correction is essential. It should be emphasised that the standard addition method can only compensate for proportional systematic errors. Constant systematic errors can neither be uncovered nor corrected with this technique. [Pg.466]
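
A sketch of the corresponding data treatment for standard additions (hypothetical numbers; the sample concentration is read from the extrapolated intercept of the fitted line with the concentration axis):

```python
import numpy as np

# Hypothetical standard-addition data: concentration ADDED to identical
# aliquots of the sample (mg/L) vs. measured absorbance
c_added = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
absorb = np.array([0.120, 0.179, 0.241, 0.302, 0.359])

# Linear fit A = slope * c_added + intercept; extrapolating to A = 0 gives
# the sample concentration (as the magnitude of the x-axis intercept)
slope, intercept = np.polyfit(c_added, absorb, 1)
c_sample = intercept / slope
print(f"Sample concentration (before any dilution correction): {c_sample:.3f} mg/L")
```

Note that the extrapolation only holds if the response stays linear over the whole range covered by the additions, as stressed above.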


The method is based on the international standard ISO 4053/IV. A small amount of the radioactive tracer is injected instantaneously into the flare gas flow through, e.g., a valve, representing the only physical interference with the process. Radiation detectors are mounted outside the pipe, and the variation of tracer concentration with time is recorded as the tracer moves with the gas stream and passes the detectors. A control, supply, and data registration unit including a PC is used for on-site data treatment... [Pg.1054]

Measurements have been made in a static laboratory set-up. A simulation model for generating supplementary data has been developed and verified. A statistical data treatment method has been applied to estimate tracer concentration from detector measurements. Accuracy in parameter estimation in the range of 5-10% has been obtained. [Pg.1057]
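
Neither excerpt spells out the estimation step. Purely as an illustrative sketch (synthetic detector signals, an assumed detector spacing, and cross-correlation as one common way to extract a transit time, not necessarily the procedure of ISO 4053/IV):

```python
import numpy as np

# Synthetic signals from two detectors: the downstream detector sees the
# tracer cloud later (and slightly broadened), plus measurement noise.
fs = 100.0                                 # sampling frequency, Hz
t = np.arange(0, 20, 1 / fs)
delay_true = 2.4                           # s, transit time between detectors
rng = np.random.default_rng(0)
det1 = np.exp(-((t - 5.0) / 0.8) ** 2) + 0.02 * rng.standard_normal(t.size)
det2 = np.exp(-((t - 5.0 - delay_true) / 0.9) ** 2) + 0.02 * rng.standard_normal(t.size)

# Cross-correlate the two records; the lag of the maximum estimates the transit time
xcorr = np.correlate(det2 - det2.mean(), det1 - det1.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size) / fs
transit = lags[np.argmax(xcorr)]

detector_spacing = 6.0                     # m, assumed distance between detectors
print(f"Transit time: {transit:.2f} s, gas velocity: {detector_spacing / transit:.2f} m/s")
```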

Note that we are interested in nj, the atomic quantum number of the level to which the electron jumps in a spectroscopic excitation. Use the results of this data treatment to obtain a value of the Rydberg constant R. Compare the value you obtain with an accepted value, and quote the source of the accepted value you use for comparison in your report. What are the units of R? A conversion factor may be necessary to obtain unit consistency. Express your value for the ionization energy of H in units of hartrees (h), electron volts (eV), and kJ mol⁻¹. We will need it later. [Pg.76]
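
The data treatment referred to is usually a straight-line fit of the measured wavenumbers to the Rydberg formula, written here with n1 for the lower level of the transition (the labeling is an assumption; the exercise itself defines the quantum numbers):

```latex
\tilde{\nu} \;=\; \frac{1}{\lambda} \;=\; R\left(\frac{1}{n_1^{2}} - \frac{1}{n_j^{2}}\right)
```

so a plot of the wavenumber against 1/nj² is linear with slope -R and intercept R/n1².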

Glasser, L. Fourier Transforms for Chemists. Part III. Fourier Transforms in Data Treatment, J. Chem. Educ. 1987, 64, A306-A313. [Pg.458]

Preprocessor. A device in a data-acquisition system that performs a significant amount of data reduction by extracting specific information from raw signal representations in advance of the main processing operation. A preprocessor can constitute the whole of a data-acquisition interface, in which case it must also perform the data-acquisition task (conversion of spectrometer signal to computer representation), or it can specialize solely in data treatment. [Pg.431]

The quality of an analytical result also depends on the validity of the sample utilized and the method chosen for data analysis. There are articles describing Sampling and automated sample preparation (see Automated instrumentation) as well as articles emphasizing data treatment (see Chemometrics; Computer technology), data interpretation (see Databases; Imaging technology), and the communication of data within the laboratory or process system (see Expert systems; Laboratory information management systems). [Pg.393]

Compression-Permeability Tests Instead of model leaf tests, compression-permeability experiments may be substituted with advantage for appreciably compressible solids. As in the case of constant-rate filtration, a single run provides data equivalent to those obtained from a series of constant-pressure runs, but it avoids the data-treatment complexity of constant-rate tests. [Pg.1706]

More detailed descriptions of small-scale sedimentation and filtration tests are presented in other parts of this section. Interpretation of the results and their conversion into preliminary estimates of such quantities as thickener size, centrifuge capacity, filter area, sludge density, cake dryness, and wash requirements also are discussed. Both the tests and the data treatment must be in experienced hands if error is to be avoided. [Pg.1751]

MULTISENSOR SYSTEMS ELECTRONIC TONGUE BASED ON LOW-SELECTIVE SENSORS AND MULTIWAY METHODS OF RECEIVED DATA TREATMENT... [Pg.19]

ROBUST AND UNBIASED ESTIMATIONS IN CHEMICAL DATA TREATMENT... [Pg.22]

As the statistical properties of experimental data are usually poorly known, nobody can guarantee that standard statistical procedures give trustworthy results. The history of chemical data treatment exemplifies impressively the persistent struggle to obtain ever more reliable and meaningful information from experimental data. [Pg.22]

The report concentrates on a few procedures of data treatment that allow some drawbacks of standard statistical procedures to be overcome. The main attention is paid to problems of regression analysis, especially to Quantitative Structure-Activity Relationships (QSAR). [Pg.22]

The properties of the least squares (LS) method (δ = 0, the non-robust procedure) and the least modules (LM) method (δ = 100%, the robust procedure) are compared comprehensively using several examples of data treatment in QSAR problems. [Pg.22]
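
As an illustration of the contrast between the two estimators, here is a minimal sketch on synthetic data containing one gross outlier; the LM (least absolute deviations) line is far less affected than the LS line:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: y = 2x + 1 with small noise and one gross outlier
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 20)
y = 2.0 * x + 1.0 + 0.3 * rng.standard_normal(x.size)
y[15] += 15.0                                  # gross outlier

# Least squares (LS): minimizes the sum of squared residuals
ls_slope, ls_intercept = np.polyfit(x, y, 1)

# Least modules (LM): minimizes the sum of absolute residuals
def sum_abs_residuals(params):
    slope, intercept = params
    return np.sum(np.abs(y - (slope * x + intercept)))

lm_slope, lm_intercept = minimize(sum_abs_residuals,
                                  x0=[ls_slope, ls_intercept],
                                  method="Nelder-Mead").x

print(f"LS: slope = {ls_slope:.2f}, intercept = {ls_intercept:.2f}")
print(f"LM: slope = {lm_slope:.2f}, intercept = {lm_intercept:.2f}")
```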

Individuals differ in their sensitivity to odor. Figure 14-7 shows a typical distribution of sensitivities to ethylsulfide vapor (17). There are currently no guidelines on inclusion or exclusion of individuals with abnormally high or low sensitivity. This variability of response complicates the data treatment procedure. In many instances, the goal is to determine some mean value for the threshold representative of the panel as a whole. The small size of panels (generally fewer than 10 people) and the distribution of individual sensitivities require sophisticated statistical procedures to find the threshold from the responses. [Pg.207]
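
The text does not prescribe a particular estimator; one summary often used in practice, assuming the individual thresholds are roughly log-normally distributed, is the geometric mean over the panel:

```python
import numpy as np

# Hypothetical individual odor thresholds for a small panel (ppb)
thresholds = np.array([0.8, 1.5, 2.0, 3.5, 4.0, 7.5, 12.0, 30.0])

# Geometric mean: average the logarithms, then exponentiate
panel_threshold = np.exp(np.mean(np.log(thresholds)))
print(f"Panel (geometric-mean) threshold: {panel_threshold:.2f} ppb")
```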

Recent developments in Raman equipment have led to a considerable increase in sensitivity. This has enabled the monitoring of reactions of organic monolayers on glassy carbon [4.292] and diamond surfaces and analysis of the structure of Langmuir-Blodgett monolayers without any enhancement effects. Although this unenhanced surface-Raman spectroscopy is expected to be applicable to a variety of technically or scientifically important surfaces and interfaces, it nevertheless requires careful optimization of the apparatus, data treatment, and sample preparation. [Pg.260]

We can reach two useful conclusions from the forms of these equations. First, plots of these integrated equations can be made with data on concentration ratios rather than absolute concentrations; second, a first-order (or pseudo-first-order) rate constant can be evaluated without knowing any absolute concentration, whereas zero-order and second-order rate constants require for their evaluation knowledge of an absolute concentration at some point in the data treatment process. This second conclusion is obviously related to the units of the rate constants of the several orders. [Pg.34]
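
For reference, the standard integrated forms for a single reactant A make the point explicit; only the first-order expression involves a concentration ratio:

```latex
\text{zero order: } [\mathrm{A}]_0 - [\mathrm{A}]_t = kt
\qquad
\text{first order: } \ln\frac{[\mathrm{A}]_0}{[\mathrm{A}]_t} = kt
\qquad
\text{second order: } \frac{1}{[\mathrm{A}]_t} - \frac{1}{[\mathrm{A}]_0} = kt
```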

In 1950 French and Wideqvist independently described a data treatment that makes use of the area under the concentration-time curve, and later authors have discussed the method. We introduce the technique by considering the second-order reaction of A and B, for which the differential rate equation is... [Pg.81]
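
The excerpt breaks off here. For a second-order reaction between A and B the differential rate equation referred to is presumably of the usual form, and integrating it gives the working relation of the area method:

```latex
-\frac{d[\mathrm{A}]}{dt} = k\,[\mathrm{A}][\mathrm{B}]
\quad\Longrightarrow\quad
[\mathrm{A}]_0 - [\mathrm{A}]_t = k \int_0^{t} [\mathrm{A}][\mathrm{B}]\,dt'
```

so a plot of [A]0 - [A]t against the area under the [A][B]-versus-time curve should be linear with slope k.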

One possibility is that the curvature is an artifact introduced by a systematic error in the measurements. This is not unlikely, because rate constants may vary by orders of magnitude over a wide temperature range, necessitating different analytical methods or data treatments in different temperature regions. Careful experimental work should be able to identify such problems. [Pg.251]

This reaction was studied spectrophotometrically by monitoring the absorbance at 830 nm, where PuO₂⁺ absorbs. The paired values of time and absorbance are presented for one experiment in Table 2-4. Figure 2-5 shows the data treatment according to Eq. (2-35). Nonlinear least-squares analysis gives k = (9.49 ± 0.22) × 10² L mol⁻¹ s⁻¹ and a calculated end-point absorbance of 0.025 ± 0.003. [Pg.25]
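
Eq. (2-35) and Table 2-4 are not reproduced here, but the general shape of such an analysis can be sketched as a nonlinear least-squares fit of an absorbance-time trace to a second-order model (equal initial concentrations assumed), with the end-point absorbance as one of the fitted parameters; all numbers below are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def second_order_abs(t, A0, Ainf, k, C0):
    """Absorbance vs. time when the absorbing species decays by second-order
    kinetics with equal initial concentrations C0: [X]t/[X]0 = 1/(1 + k*C0*t)."""
    return Ainf + (A0 - Ainf) / (1.0 + k * C0 * t)

# Hypothetical absorbance-time data (not the values of Table 2-4)
t_data = np.array([0, 5, 10, 20, 40, 80, 160, 320], dtype=float)   # s
a_data = np.array([0.820, 0.660, 0.555, 0.425, 0.300, 0.195, 0.120, 0.075])

C0 = 1.0e-3   # assumed initial concentration, mol/L
popt, pcov = curve_fit(lambda t, A0, Ainf, k: second_order_abs(t, A0, Ainf, k, C0),
                       t_data, a_data, p0=[0.8, 0.02, 100.0])
perr = np.sqrt(np.diag(pcov))
print(f"k = {popt[2]:.3g} +/- {perr[2]:.2g} L mol^-1 s^-1, "
      f"end-point absorbance = {popt[1]:.3f} +/- {perr[1]:.3f}")
```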

According to Eq. (3-7), a plot of ln([A]t - [A]∞) against time will be linear. The plot has, as the negative of its slope, the sum k1 + k-1. The implication that this data treatment yields a sum is at first surprising, because this rate constant characteristic of the equilibration is clearly larger than the forward rate constant alone. The net rate itself, on the other hand, is smaller than the forward rate, since the reverse rate is subtracted from it, as in Eq. (3-2). These statements are not contradictory, and they illustrate the need to distinguish between a rate and a rate constant. [Pg.47]
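
In symbols, for opposing first-order reactions with forward and reverse rate constants k1 and k-1 (and [A]∞ the equilibrium concentration), the relation has the form:

```latex
\ln\bigl([\mathrm{A}]_t - [\mathrm{A}]_\infty\bigr)
  \;=\; \ln\bigl([\mathrm{A}]_0 - [\mathrm{A}]_\infty\bigr) \;-\; (k_1 + k_{-1})\,t
```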

Application of equation 10 to the experimental D vs. [HSO₄⁻] data determined at 25°C and both 1 and 2 M acidity yielded straight-line plots with slopes indistinguishable from zero and reproduced the Bi values determined in a non-linear regression fit of the data. This result implies no adsorption of PuSO₄ by the resin and justifies use of the simpler data treatment represented by equation 2. A similar analysis of the Th(IV)-HSO₄⁻ system done by Zielen (9) likewise produced results consistent with no adsorption of ThSO₄²⁺ by Dowex AG50X12 resin. [Pg.256]

The RPA built-in data treatment extracts 16 discrete values from the recorded torque signal in order to calculate, through a discrete FT (U.S. Patent 4,794,788), the real and imaginary components... [Pg.819]

FIGURE 30.1 Testing principle and built-in data treatment of the RPA. [Pg.819]

It is clear that this data treatment is strictly valid only provided that the tested material exhibits linear viscoelastic behavior, i.e., that the measured torque always remains proportional to the applied strain. In other words, when the applied strain is sinusoidal, the measured torque must remain sinusoidal as well. The RPA built-in data treatment does not check this γ(ω)/S*(ω) proportionality, but a strain sweep test is the usual manner of verifying the strain amplitude range over which the complex torque reading is constant at fixed frequency (and constant temperature). [Pg.820]
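
A minimal sketch of the kind of check implied here (not the RPA's patented built-in routine): sample the torque over an integer number of strain cycles, Fourier-transform it, and compare the harmonic content with the fundamental; a noticeable third harmonic signals departure from linear viscoelastic behavior.

```python
import numpy as np

# Synthetic torque signal over an integer number of strain cycles:
# mostly the fundamental, plus a small 3rd harmonic mimicking nonlinearity.
cycles, pts_per_cycle = 8, 64
t = np.arange(cycles * pts_per_cycle) / pts_per_cycle        # time in cycle units
torque = np.sin(2 * np.pi * t) + 0.05 * np.sin(3 * 2 * np.pi * t)

spectrum = np.abs(np.fft.rfft(torque))
fundamental = spectrum[cycles]            # FFT bin of the excitation frequency
third_harmonic = spectrum[3 * cycles]

ratio = third_harmonic / fundamental      # ~0 for a linear response
print(f"T(3w)/T(1w) = {ratio:.3f}")
```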

According to the strain sweep test protocols described above, RPA-FT experiments and data treatment yield essentially two types of information, which reflect how the main torque component (i.e., the torque at the fundamental frequency)... [Pg.829]

In summary, the FT rheometry protocols described above and the associated data treatment yield a considerable amount of information about the viscoelastic character of materials. Within less than 1 h, two samples are tested and the full data treatment is performed (using the present combination of VBA macros and MathCad routines). [Pg.830]





Automation in the acquisition and treatment of spectroscopic data

Comparing treatments for continuous data

Contents 3 Data treatment

Data Collection and Computer Treatment

Data Recording and Treatment

Data Treatment Strategy

Data Treatment and Modeling

Data acquisition and treatment

Data pre-treatment

Data treatment Experimental error

Data treatment Rate estimation

Data treatment Smoothing methods

Electrical conductivity data treatment

Electrochemical potential data treatment

Enzyme kinetic data, treatments

Error and Treatment of Data

Fluorescence enhancement data treatment

Industrial data metallurgical offgas treatment

Measurement and data treatment

Measures of treatment benefit for categorical and ordinal data

Nonlinear Least Square Data Treatment of NMR Titration Method

Photoemission of Adsorbates Data Treatment

Programs, data treatment

Statistical Data Treatment and Evaluation

Statistical Treatment of Data

Statistical treatment of free sorting data

TREATMENT OF EXPERIMENTAL DATA

The Mathematical Treatment of Low-Pressure VLE Data

The Treatment of Experimental Data

Time-dependent Data Treatment

Treatment Episode Data Set

Treatment effects/differences survival data

Treatment of Data

Treatment of Data General Equation and Zimm Plot

Treatment of Dilute Solution Data

Treatment of Intrinsic Viscosity Data

Treatment of Missing Data

Treatment of Rheological Data Using Models

Treatment of adsorption data

Treatment of the data from a single run

Treatment randomization data

Unobserved data, treatment
