
Sample Setting

The values of the sample mean x̄ and the sample standard deviation s vary from sample set to sample set. However, as N increases, they may be expected to become more and more stable. Their limiting values, for very large N, are numbers characteristic of the frequency distribution, and are referred to as the population mean and the population standard deviation, respectively. [Pg.192]

The standard deviation σ may be estimated by calculating the standard deviation s drawn from a small sample set as follows ... [Pg.197]
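The formula itself is truncated in the excerpt; the standard estimator it refers to, with the N − 1 divisor discussed below, is presumably:

```latex
s = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(x_i - \bar{x}\right)^2}
```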

A method of resolution that makes very few a priori assumptions is based on principal components analysis. The various forms of this approach are based on the self-modeling curve resolution developed in 1971 (55). The method requires a data matrix comprised of spectroscopic scans obtained from a two-component system in which the concentrations of the components vary over the sample set. Such a data matrix could be obtained, for example, from a chromatographic analysis where spectroscopic scans are recorded at several points in time as an overlapped peak elutes from the column. [Pg.429]
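As a rough illustration of such a data matrix (not the original method of ref. 55; the spectra, elution profiles, and names below are invented), a bilinear two-component matrix can be simulated and its chemical rank read off the singular values:

```python
import numpy as np

# Two hypothetical pure-component spectra (50 wavelength channels)
wavelengths = np.linspace(0, 1, 50)
s1 = np.exp(-((wavelengths - 0.3) ** 2) / 0.01)   # component 1 spectrum
s2 = np.exp(-((wavelengths - 0.6) ** 2) / 0.02)   # component 2 spectrum

# Concentrations varying over the sample set: scans taken in time
# as an overlapped chromatographic peak elutes
t = np.linspace(0, 1, 20)
c1 = np.exp(-((t - 0.40) ** 2) / 0.02)            # elution profile 1
c2 = np.exp(-((t - 0.55) ** 2) / 0.02)            # elution profile 2

# Bilinear data matrix: each row is one spectroscopic scan
D = np.outer(c1, s1) + np.outer(c2, s2)
D += np.random.default_rng(0).normal(0, 1e-4, D.shape)  # small noise

# Two singular values dominate, confirming a two-component system
singular_values = np.linalg.svd(D, compute_uv=False)
print(singular_values[:4])
```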

The statistical measures can be calculated using most scientific calculators, but confusion can arise if the calculator offers the choice between dividing the sum of squares by N or by N − 1. If the object is simply to calculate the variance of a set of data, divide by N. If, on the other hand, a sample set of data is being used to estimate the properties of a supposed population, division of the sum of squares by N − 1 gives a better estimate of the population variance. The reason is that the sample mean is unlikely to coincide exactly with the (unknown) true population mean, and so the sum of squares about the sample mean will be less than the true sum of squares about the population mean. This is compensated for by using the divisor N − 1. Obviously, this becomes important with smaller samples. [Pg.278]
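In numpy terms, the two divisors correspond to the ddof argument (a generic sketch, not tied to any particular calculator):

```python
import numpy as np

data = np.array([10.1, 9.8, 10.3, 9.9, 10.0])  # a small sample set

# Divisor N: the variance of this data set taken as-is
var_n = data.var(ddof=0)

# Divisor N - 1: the better estimate of the population variance,
# compensating for the sum of squares being taken about the sample
# mean rather than the (unknown) true population mean
var_n_minus_1 = data.var(ddof=1)

print(var_n, var_n_minus_1)  # the difference matters most for small N
```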

Are the acceptance criteria for attribute data sampling set at zero defects ... [Pg.82]

Responses to the agonist are obtained in the absence and presence of a range of concentrations of antagonist. A sample set of data is given in Table 12.13 and Figure 12.14a. [Pg.273]

In order for the inverse of [CCᵀ] to exist, C must have at least as many columns as rows. Since C has one row for each component and one column for each sample, this means that we must have at least as many samples as components in order to be able to compute equation [33]. This would certainly seem to be a reasonable constraint. Also, if there is any linear dependence among the rows or columns of C, [CCᵀ] will be singular and its inverse will not exist. One of the most common ways of introducing linear dependency is to construct a sample set by serial dilution. [Pg.52]
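A minimal numeric sketch of this failure mode, with invented concentrations: in a serially diluted mixture every sample is a scalar multiple of its parent, so the rows of C are proportional and CCᵀ is singular:

```python
import numpy as np

# Concentration matrix C: one row per component, one column per sample.
# A 1:2 serial dilution of a fixed mixture dilutes both components in
# lockstep, so the two rows of C are proportional (linearly dependent).
c_component_1 = np.array([8.0, 4.0, 2.0, 1.0])
c_component_2 = c_component_1 / 4.0
C = np.vstack([c_component_1, c_component_2])

CCt = C @ C.T
print(np.linalg.matrix_rank(CCt))  # 1, not 2: CCt is singular
# np.linalg.inv(CCt) would raise LinAlgError ("Singular matrix") here
```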

The Predicted Residual Error Sum of Squares (PRESS) is simply the sum of the squared prediction errors over all of the samples in a sample set. [Pg.168]
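A one-line formulation in code (a generic sketch; the function name and values are invented):

```python
import numpy as np

def press(y_actual, y_predicted):
    """Predicted Residual Error Sum of Squares: the sum of the squared
    prediction errors over every sample in the sample set."""
    errors = np.asarray(y_actual) - np.asarray(y_predicted)
    return float(np.sum(errors ** 2))

# Example with made-up reference and predicted values
print(press([1.00, 2.00, 3.00], [1.05, 1.90, 3.10]))  # 0.0225
```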

Let us now examine sample sets of data. We shall consider two reactions, the formation of a biradical (1) [Eq. (7-10)] and an electron transfer reaction between two ruthenium complexes [Eq. (7-11)], in which LN represents nitrogen-donor ligands specified in the original reference (2). The chemical equations are... [Pg.157]

Two possibilities for the observed enriched values for many of the grazers, and the often concomitant but slight enrichment of browsers (Fig. 5.6), present themselves. Either these shifts represent atmospheric CO2 enrichment shifts at particular periods during the last ~100 ka represented (the Late Pleistocene), or a hitherto unknown or unrecognized fractionation process has taken place during burial and fossilization. The former hypothesis could be tested by comparing observations from a larger sample set from the site with CO2 concentration and carbon isotope data from Antarctic ice-core records or high-resolution marine isotope records. [Pg.106]

Torkelson and coworkers [274,275] have developed kinetic models to describe the formation of gels in free-radical polymerization. They have incorporated diffusion limitations into the kinetic coefficient for radical termination and have compared their simulations to experimental results on methyl methacrylate polymerization. A basic kinetic model with initiation, propagation, and termination steps, including the diffusion limitations, was found to describe the gelation effect, or time for gel formation, of several sample sets of experimental data. [Pg.559]

Let us suppose that dust particles have been collected in the air above a city and that the amounts of p constituents, e.g. Si, Al, Ca, ..., Pb, have been determined in these samples. The elemental compositions obtained for n (e.g. 100) samples, taken over a grid of sampling points, can be arranged in a data matrix X (Fig. 34.1). Each row of the table represents the elemental composition of one of the samples. A column represents the amount of one of the elements found in the sample set. Let us further suppose that there are two main sources of dust in the neighbourhood of the sampled area, and that the particles originating from each source have a specific concentration pattern for the elements Si to Pb. These concentration patterns are described by the vectors s1 and s2. For instance, the dust in the air may originate from a power station and from an incinerator, each having a specific concentration pattern sk = [Sik, Alk, Cak, ..., Pbk] with k = 1, 2. [Pg.243]
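A toy construction of this X (the source patterns and mixing proportions below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
elements = ["Si", "Al", "Ca", "Fe", "Pb"]          # p = 5 constituents

# Hypothetical source patterns s1 (power station) and s2 (incinerator)
s1 = np.array([40.0, 20.0, 10.0, 5.0, 0.5])
s2 = np.array([5.0, 2.0, 30.0, 1.0, 8.0])

# Each of n samples mixes the two sources in varying amounts
n = 100
contributions = rng.uniform(0, 1, size=(n, 2))     # one row per sample
X = contributions @ np.vstack([s1, s2])            # n x p data matrix
X += rng.normal(0, 0.1, X.shape)                   # measurement noise

# Each row of X: elemental composition of one sample;
# each column: one element across the whole sample set
print(X.shape)  # (100, 5)
```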

In 1978, Ho et al. [33] published an algorithm for rank annihilation factor analysis. The procedure requires two bilinear data sets, a calibration standard set X1 and a sample set X2. The calibration set is obtained by measuring a standard mixture which contains known amounts of the analytes of interest. The sample set contains the measurements of the sample in which the analytes have to be quantified. Let us assume that we are only interested in one analyte. By a PCA we obtain the rank R of the sample data matrix, which is theoretically equal to 1 + n, where n is the number of interfering compounds. Because the calibration set contains only one compound, its rank is equal to one. [Pg.298]
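A sketch of the rank argument only, not of the rank annihilation algorithm itself (all matrices below are synthetic):

```python
import numpy as np

def chemical_rank(X, tol=1e-6):
    """Estimate the rank of a bilinear data matrix from its singular
    values, as a PCA would."""
    sv = np.linalg.svd(X, compute_uv=False)
    return int(np.sum(sv > tol * sv[0]))

rng = np.random.default_rng(2)
analyte = np.outer(rng.random(10), rng.random(15))       # one compound
interferent = np.outer(rng.random(10), rng.random(15))   # one interferent

X1 = 2.0 * analyte                 # calibration standard: analyte only
X2 = 1.3 * analyte + interferent   # sample: analyte + n = 1 interferent

print(chemical_rank(X1))  # 1
print(chemical_rank(X2))  # 2 = 1 + n
```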

The general sample set for method validation parameters is the same for all matrices under consideration (except body fluids and tissues, see Section 4.2.5) ... [Pg.28]

The second requirement is that enforcement methods for food must be validated by an independent laboratory [independent laboratory validation (ILV)]. The sample set is identical with the general sample set (see Section 4.1). If the method is identical for all four crop groups (mentioned at the beginning of the section), it may be sufficient to perform the ILV for plant materials with a minimum of two matrices, one of them with a high water content. In the case of food of animal origin, the ILV should be performed with at least two of the matrices milk, egg, meat, and, if appropriate, fat. [Pg.30]

The sample set must include two fortification levels appropriate to the proposed LOQ and likely residue levels or 10 times the LOQ, except for body fluids and tissues (considered in Section 5.2.3) where validation data at the LOQ are sufficient. Five determinations should be made at each fortification level. In general, mean... [Pg.33]

In contrast to the requirements for enforcement methods and to ensure sufficient quality of the generated data, validation data should be submitted for all types of crop samples to be analyzed. However, matrix comparability and a reduced validation data set may be considered where two or more very similar matrices are to be analyzed (e.g., cereal grain). A reduced sample set may also be acceptable (two levels, at least three determinations and an assessment of matrix interference) provided that the investigated samples belong to the same crop group as described in SANCO/825/00 (see also Section 4.2.1). [Pg.34]

For confirmatory procedures, the fortified sample sets at half and twice the tolerance... [Pg.84]

Each of the three laboratories analyzes the same sample sets that the developer was required to analyze during method development ... [Pg.90]

Occasionally the complete sample set of an individual commodity was not analyzed within a validation study. This is not a problem if the same study provides data on additional commodities belonging to the same matrix group. Consequently, the missing data, e.g., a second concentration level, are replaced, provided that control sample results are presented for all crops. [Pg.107]

Seven standards are prepared from the above HEMA/EMA standard solutions, typically with every sample set. Preparing the detector calibration standards in this way accounts for the completeness of the HEMA conversion to MEMA. [Pg.355]

A new nonweighted linear calibration curve is to be generated with every set of samples analyzed. The calibration standards are included in the analytical sample set, as the set is injected into the GC system, preferably with a standard between every two analytical samples. [Pg.367]

Detector calibration. A calibration curve is generated for every set of samples with a minimum of five standard levels. The standards are interspersed among the analytical samples of each set. The first and last sample in each analytical sample set must be a standard. [Pg.383]

Concentrations of terbacil and its Metabolites A, B and C are calculated from a calibration curve for each analyte run concurrently with each sample set. The equation of the line based on the peak height of the standard versus nanograms injected is generated by least-squares linear regression analysis performed using Microsoft Excel. [Pg.582]
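The same calculation is straightforward outside Excel; a generic least-squares sketch with invented standard levels and peak heights:

```python
import numpy as np

# Calibration standards: nanograms injected vs. peak height
ng_injected = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
peak_height = np.array([120.0, 235.0, 480.0, 1190.0, 2400.0])

# Least-squares linear regression: peak height = slope * ng + intercept
slope, intercept = np.polyfit(ng_injected, peak_height, 1)

# Back-calculate the amount in an unknown from its peak height
unknown_height = 820.0
ng_found = (unknown_height - intercept) / slope
print(round(ng_found, 2))
```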

The use of a matrix blank is the simplest way to overcome a matrix effect, but the analyst must ensure that the matrix blank is uniform and does not change between sample sets. Acquiring a uniform blank matrix may be problematic if assays are conducted over an extended time period. Caution must be taken when an analysis, validated for one matrix or species, is used for a different tissue or species, because... [Pg.684]

Another consideration when planning field fortification levels for the matrices is the lowest level of fortification. The low-level fortification samples should be set high enough above the limit of quantitation (LOQ) of the analyte to ensure that inadvertent field contamination does not add to, and thereby drive up, the field recovery of the low-fortification samples. Setting the low field fortification level too low will lead to unacceptably high apparent levels of the analyte in low field spike matrix samples if inadvertent aerial drift or pesticide transport occurs in and around the area where the field fortification samples are located. Such inadvertent aerial drift or transport is extremely hard to avoid, since wind shifts and temperature inversions commonly occur during mixer-loader/re-entry exposure studies. [Pg.1009]

Statistical Analyses of Liquefaction Data for a Large Sample Set... [Pg.22]

The raw data in the more comprehensive study (61) were conversions, determined in duplicate, when each of 104 coals selected from three geological provinces was heated with tetralin under standard conditions, together with the results of 14 commonly made analytical determinations for each coal. An early observation in this study was that when data for all 104 samples were plotted against volatile matter, a steady decrease of conversion with decreasing volatile matter was apparent, but there was a great deal of scatter (r = 0.85). In any case, the formal requirements that make valid statistical analyses possible were not met by the data matrix, as evidenced by skewed and bimodal relationships between the variables; the sample set was heterogeneous. [Pg.22]
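The screening diagnostics mentioned, a correlation coefficient and checks for skewness or bimodality, are easy to reproduce in outline (the data below are invented stand-ins, not the coal data of ref. 61):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Invented stand-ins for volatile matter (%) and liquefaction conversion (%)
volatile_matter = rng.uniform(15, 45, 104)
conversion = 1.8 * volatile_matter + rng.normal(0, 8, 104)

r, _ = stats.pearsonr(volatile_matter, conversion)
print(f"r = {r:.2f}")  # a trend can coexist with considerable scatter

# Skewness far from 0 (or a bimodal histogram) flags a heterogeneous
# sample set that violates the assumptions of standard regression
print(f"skew = {stats.skew(conversion):.2f}")
```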

