Big Chemical Encyclopedia


Chemical analysis errors

Table IV shows the results of the tests in detail. Test-1 was done in two different temperature ranges, 1250-1300°C for the first 30 minutes and 1400-1430°C for the second half of the test. The zinc content in the Fe-S-O matte increased as the test proceeded until, at 30 minutes, it reached its maximum of 1.37 wt%, which is considered the saturation value. In the second half of the test, the temperature was increased to 1410-1430°C and the zinc solubility decreased to ~0.5 wt%, essentially the same as that present in the feed. It is worth noting that the zinc concentration in the matte before the test should be very small because no zinc was added to the system at the beginning; the reported value might be due to a chemical analysis error. If this were true, the zinc recovery into the gas phase was almost 100% for the second half of the test at temperatures >1400°C. Figure 8 shows the changes in zinc solubility during the test.
Chemical analysis of the metal can serve various purposes. For determination of the metal-alloy composition, a variety of techniques has been used. In the past, wet-chemical analysis was often employed, but the significant size of the sample needed was a primary drawback. Nondestructive, energy-dispersive x-ray fluorescence spectrometry is often used when high precision is not needed. However, this technique only allows a surface analysis, and significant surface phenomena such as preferential enrichments and depletions, which often occur in objects having a burial history, can cause serious errors. For more precise quantitative analyses, samples have to be removed from below the surface to be analyzed by means of atomic absorption (82), spectrographic techniques (78,83), etc. [Pg.421]

K. Eckschlager, Errors, Measurements and Results in Chemical Analysis, Van Nostrand Reinhold, London, 1969. [Pg.156]

Part A, dealing with the Fundamentals of Quantitative Chemical Analysis, has been extended to incorporate sections of basic theory which were originally spread around the body of the text. This has enabled a more logical development of theoretical concepts to be possible. Part B, concerned with errors, statistics, and sampling, has been extensively rewritten to cover modern approaches to sampling as well as the attendant difficulties in obtaining representative samples from bulk materials. The statistics has been restructured to provide a logical, stepwise approach to a subject which many people find difficult. [Pg.903]

Chemical analysis of composite systems is often severely restricted by the invalidity of the co-existence principle (although there are a few cases known in which estimations are made possible just because of the occurrence of chemical induction). Therefore, many efforts have been directed at exploring at least qualitatively the source of errors caused by induced reactions. That is why our present knowledge about such reactions is rather qualitative in nature. [Pg.519]

The choice of a chemical analysis procedure is also part of the design. We will not discuss the additional difficulties the chemical analysis procedure can introduce. In this paper, we assume that the chemical analysis procedure has adequate sensitivity and relatively small error. [Pg.121]

Quantitative analysis of multicomponent additive packages in polymers is a difficult subject, as evidenced by the results of round-robins [110,118,119]. Sample inhomogeneity is often greater than the error in analysis. In procedures entailing extraction/chromatography, the main uncertainty lies in the extraction stage. Chromatographic methods have become a ubiquitous part of quantitative chemical analysis. Dissolution procedures (without precipitation) lead to the most reliable quantitative results, provided that total dissolution can be achieved; follow-up SEC-GC is molecular-mass-limited by the requirements of GC. Of the various solid-state procedures (Table 10.27), only TG, SHS and, eventually, Py lead to easily obtainable, accurate quantitation. [Pg.739]

We will begin by taking a look at the detailed aspects of a basic problem that confronts most analytical laboratories: comparing two quantitative methods performed by different operators or at different locations. This area is not restricted to spectroscopic analysis; many of the concepts we describe here can be applied to evaluating the results from any form of chemical analysis. In our case we will examine a comparison of two standard methods to determine precision, accuracy, and systematic errors (bias) for each of the methods and laboratories involved in an analytical test. As it happens, in the case we use for our example, one of the analytical methods is spectroscopic and the other is an HPLC method. [Pg.167]
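A minimal sketch of such a two-method comparison: the paired differences between results on the same samples estimate the systematic error (bias), and a paired t-statistic indicates whether that bias is significant relative to random error. The numbers below are invented for illustration, not data from the study described above.

```python
import math
import statistics

# Hypothetical paired results (same samples) from a spectroscopic method
# and an HPLC method; values are purely illustrative.
spectro = [10.2, 11.8, 9.9, 12.4, 10.7, 11.1]
hplc    = [10.0, 11.5, 10.1, 12.1, 10.4, 11.0]

# Paired differences estimate the bias between the two methods.
diffs = [s - h for s, h in zip(spectro, hplc)]
bias = statistics.mean(diffs)   # mean difference (systematic error)
s_d = statistics.stdev(diffs)   # spread of the differences (precision)

# Paired t-statistic: bias relative to its standard error.
n = len(diffs)
t = bias / (s_d / math.sqrt(n))

print(f"bias = {bias:.3f}, s_d = {s_d:.3f}, t = {t:.2f} (df = {n - 1})")
```

Comparing t against the tabulated Student's t value for n - 1 degrees of freedom then decides whether the bias is statistically significant at the chosen confidence level.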

One common characteristic of many advanced scientific techniques, as indicated in Table 2, is that they are applied at the measurement frontier, where the net signal (S) is comparable to the residual background or blank (B) effect. The problem is compounded because (a) one or a few measurements are generally relied upon to estimate the blank—especially when samples are costly or difficult to obtain, and (b) the uncertainty associated with the observed blank is assumed normal and random and calculated either from counting statistics or replication with just a few degrees of freedom. (The disastrous consequences which may follow such naive faith in the stability of the blank are nowhere better illustrated than in trace chemical analysis, where S ≲ B is often the rule [10].) For radioactivity (or mass spectrometric) counting techniques it can be shown that the smallest detectable non-Poisson random error component is approximately 6, where ... [Pg.168]
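The counting-statistics situation described above can be sketched with Currie-style detection limits, a standard treatment of signals measured against a Poisson-distributed blank. The blank count and the single paired-blank subtraction below are assumptions for illustration, not values from the text.

```python
import math

# Illustrative blank: B counts observed in one blank measurement.
B = 100.0

# Net signal = gross counts - blank counts; at zero true signal both
# are Poisson with mean B, so the net-signal standard deviation is
# sigma_0 = sqrt(B + B) = sqrt(2B).
sigma0 = math.sqrt(2 * B)

# Currie's critical level (5% false-positive risk) and detection limit
# (also 5% false-negative risk), in counts.
L_C = 1.645 * sigma0
L_D = 2.71 + 3.29 * sigma0

print(f"sigma0 = {sigma0:.2f}, L_C = {L_C:.1f}, L_D = {L_D:.1f} counts")
```

With a blank of only 100 counts, a net signal must exceed roughly 49 counts to be reliably detected, which illustrates why a blank estimated from one or two measurements dominates the uncertainty when S is comparable to B.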

Are the equilibrium constants for the important reactions in the thermodynamic dataset sufficiently accurate? The collection of thermodynamic data is subject to error in the experiment, the chemical analysis, and the interpretation of the experimental results. Error margins, however, are seldom reported and never seem to appear in data compilations. Compiled data, furthermore, have generally been extrapolated from the temperature of measurement to that of interest (e.g., Helgeson, 1969). The stabilities of many aqueous species have been determined only at room temperature, for example, and mineral solubilities are often measured at high temperatures, where reactions approach equilibrium most rapidly. Evaluating the stabilities, and sometimes even the stoichiometries, of complex species is especially difficult and prone to inaccuracy. [Pg.24]

Most importantly, has the modeler conceptualized the reaction process correctly? The modeler defines a reaction process on the basis of a concept of how the process occurs in nature. Many times the apparent failure of a calculation indicates a flawed concept of how the reaction occurs rather than an error in a chemical analysis or the thermodynamic data. The failed calculation, in this case, is more useful than a successful one because it points out a basic error in the modeler's understanding. [Pg.26]

In this case, a likely explanation for the apparent supersaturation is that the chemical analysis included not only dissolved aluminum and iron, but also a certain amount of aluminum and iron suspended in the water as colloids and fine sediments. Analytical error of this type occurs because the standard sampling procedure calls for passing the sample through a rather coarse filter of 0.45 μm pore size and then adding acid to preserve it during transport and storage. [Pg.95]

Modern-day chemical analysis can involve very complicated material samples—complicated in the sense that there can be many substances present in the sample, creating a myriad of problems with interferences when the lab worker attempts the analysis. These interferences can manifest themselves in a number of ways. The kind of interference that is most familiar is one in which substances other than the analyte generate an instrumental readout similar to that of the analyte, such that the interference adds to the readout of the analyte, creating an error. However, an interference can also suppress the readout for the analyte (e.g., by reacting with the analyte). An interference present in a chemical to be used as a standard (such as a primary standard) would cause an error unless its presence and concentration were known (determinate error, or bias). Analytical chemists must deal with these problems, and chemical procedures designed to effect separations or purification are now commonplace. [Pg.299]

The numerous uncertainties usually encountered in a chemical analysis give rise to a host of errors that may be broadly categorised into two heads, namely ... [Pg.72]

Part I has three chapters that exclusively deal with general aspects of pharmaceutical analysis. Chapter 1 focuses on pharmaceutical chemicals and their respective purity and management. Critical information with regard to the description of the finished product, sampling procedures, bioavailability, identification tests, physical constants and miscellaneous characteristics, such as ash values, loss on drying, clarity and color of solution, specific tests, limit tests of metallic and non-metallic impurities, limits of moisture content, volatile and non-volatile matter and, lastly, residue on ignition has also been dealt with. Each section provides adequate procedural details supported by ample typical examples from the official compendia. Chapter 2 embraces the theory and technique of quantitative analysis with specific emphasis on volumetric analysis, volumetric apparatus, their specifications, standardization and utility. It also includes biomedical analytical chemistry, colorimetric assays, theory and assay of biochemicals, such as urea, bilirubin and cholesterol, and enzymatic assays, such as alkaline phosphatase and lactate dehydrogenase, salient features of radioimmunoassay, and automated methods of chemical analysis. Chapter 3 provides special emphasis on errors in pharmaceutical analysis and their statistical validation. The first aspect is related to errors in pharmaceutical analysis and embodies classification of errors, accuracy, precision and makes... [Pg.539]

Statistics have been used in chemical analysis in increasing amounts to quantify errors. The focus shifts now to other areas, such as in sampling and in measurement calibrations. Statistical and computer methods can be brought into use to give a quantified amount of error and to clarify complex mixture problems. These areas are a part of chemometrics as we use the term today. [Pg.291]
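As one small example of the calibration statistics mentioned above, a straight-line least-squares fit of instrument response against standard concentrations yields a slope, an intercept, and a residual standard error that quantifies the calibration's random error. The standards and responses below are invented for illustration.

```python
import math

# Hypothetical calibration standards and instrument responses.
conc = [0.0, 1.0, 2.0, 4.0, 8.0]       # concentrations (arbitrary units)
resp = [0.02, 0.51, 1.01, 1.98, 4.05]  # illustrative responses

n = len(conc)
xm = sum(conc) / n
ym = sum(resp) / n

# Ordinary least-squares slope and intercept.
sxx = sum((x - xm) ** 2 for x in conc)
sxy = sum((x - xm) * (y - ym) for x, y in zip(conc, resp))
slope = sxy / sxx
intercept = ym - slope * xm

# Residual standard error (n - 2 degrees of freedom): the typical
# scatter of the responses about the fitted calibration line.
ss_res = sum((y - (intercept + slope * x)) ** 2
             for x, y in zip(conc, resp))
s_e = math.sqrt(ss_res / (n - 2))

print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, s_e = {s_e:.4f}")
```

The residual standard error can then be propagated into the uncertainty of concentrations read back from the calibration line, which is one of the quantified-error tasks chemometrics addresses.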

If the oral route is to be used, the chemical may be mixed with the diet, dissolved in drinking water, or delivered by a tube to the stomach (gavage). An inhalation exposure requires special equipment to create the desired concentration of the chemical in the air to be breathed by the animal. In any case, the analytical chemist must be called on to measure the amount of the chemical in these various media after it has been added to guarantee that the dose is known with accuracy. Some chemicals decompose relatively quickly, or errors are made in weighing or mixing the chemical to achieve the desired diet, water, or air concentrations, so chemical analysis of these media is essential throughout the study. [Pg.82]

Tetralin at 60°C, higher than reported earlier (0.06 and 0.04) at 90°C (17), and styrene 1 to 3 times as reactive as Tetralin, depending on the peroxy radical. Mayo et al. (12) found cumene 0.22 and 0.13 as reactive as Tetralin and styrene 2.3 times as reactive. Alagy, Clement, and Balaceanu found cumene 0.1 as reactive as Tetralin, using a mathematical instead of a chemical analysis (1). Howard and Ingold found cumene 0.45, 0.48, and 0.50 times as reactive as Tetralin toward the cumyl, tetralyl, and dihydroanthracylperoxy radicals, respectively (8). These numbers show qualitative agreement; whether the differences are caused by the reactivity of the various peroxy radicals or by experimental error is discussed by Mayo, Syz, Mill, and Castleman (12). [Pg.39]

E. B. Sandell, "Errors in Chemical Analysis," in Treatise on Analytical Chemistry, I. M. Kolthoff and P. J. Elving, eds., Part 1, Vol. 1, John Wiley, New York, 1959. [Pg.81]





