Big Chemical Encyclopedia


Validating the Analysis

The SDWA states that the USEPA must decide whether or not to regulate at least five different contaminants every five years, and that every five years it must publish a list of chemicals from which these contaminants are to be selected. This list, known as the Drinking Water Contaminant Candidate List, is subject to public review and comment (Pontius, 1998). To validate the USEPA's analysis, the SDWA requires that the agency consult with the Science Advisory Board. Although the USEPA must first select for review the chemicals that pose the greatest threat to health, the SDWA allows a chemical to be regulated without scientific proof of danger when a valid health threat exists. However, every analysis must include a benefit/cost study before regulation of a contaminant occurs. [Pg.26]

To validate the analysis methodology used here for general spent fuel applications, it is desirable to perform calculations for configurations that are close to those expected in AFR scenarios. Thus, the burnup, the fraction of spent fuel, and the downtime were important factors in selecting these reactor configurations as proposed benchmarks for burnup credit applications. The Sequoyah benchmarks were selected primarily for the MOC core, which at the time of startup had experienced a 2.7-year downtime and was composed entirely of burned fuel. The other two Sequoyah configurations were evaluated because the data were readily available as a test of consistency with the MOC case. The TMI benchmark was selected for similar reasons. The core consisted primarily of burned fuel, with all fresh fuel located at the core periphery, where its importance is diminished. Startup occurred after an especially long downtime of 6.6 years. The Surry and North Anna benchmarks, on the other hand, were performed as a comparison with earlier... [Pg.26]

The simulation results were reported as polarization maps, developed so that the concentration polarization coefficient (CPC) can be read directly and used, under the operating conditions considered, to estimate the transmembrane flux once the intrinsic membrane permeance and the bulk Sieverts driving force are known (eqn (14.12)). The results from the model solution were confirmed by comparison with the experimental ones in order to validate the analysis ... [Pg.147]
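As an illustration of how such a map would be used, here is a minimal sketch of the flux estimate, assuming a Sieverts-law (square-root) pressure dependence and treating the CPC as a multiplicative correction on the bulk driving force. The function name, units, and numbers are illustrative, not taken from the chapter:

```python
import math

def sieverts_flux(permeance, p_ret, p_perm, cpc=1.0):
    """Estimate transmembrane hydrogen flux from Sieverts' law:

        J = CPC * permeance * (sqrt(p_ret) - sqrt(p_perm))

    cpc (<= 1) scales the bulk driving force down to the effective
    driving force at the membrane surface.
    """
    return cpc * permeance * (math.sqrt(p_ret) - math.sqrt(p_perm))

# Illustrative values: permeance 1e-3 mol m^-2 s^-1 Pa^-0.5,
# retentate 4e5 Pa, permeate 1e5 Pa, CPC read from a map as 0.8
flux = sieverts_flux(1e-3, 4e5, 1e5, cpc=0.8)
```

With CPC = 1 the same call recovers the unpolarized (intrinsic) flux, so the map entry acts purely as a correction factor.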

Detailed Monte Carlo studies were needed to prepare for the data analysis, with particular attention given to a reliable Monte Carlo prediction of the p f variable. Appropriate techniques to validate the analysis method using a data-driven approach had to be developed, which also illustrates the challenges one faces at the LHC in performing this analysis. [Pg.13]

The raw data collected during the experiment are then analyzed. Frequently the data must be reduced or transformed to a more readily analyzable form. A statistical treatment of the data is used to evaluate the accuracy and precision of the analysis and to validate the procedure. These results are compared with the criteria established during the design of the experiment, and then the design is reconsidered, additional experimental trials are run, or a solution to the problem is proposed. When a solution is proposed, the results are subject to an external evaluation that may result in a new problem and the beginning of a new analytical cycle. [Pg.6]
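The statistical treatment described above, evaluating accuracy against an accepted value and precision as relative standard deviation, can be sketched in a few lines. The replicate values and the accepted value below are hypothetical:

```python
import statistics

def summarize(replicates, true_value):
    """Evaluate accuracy (relative error vs. an accepted value) and
    precision (percent relative standard deviation) for replicates."""
    mean = statistics.mean(replicates)
    s = statistics.stdev(replicates)          # sample standard deviation
    rel_error = 100 * (mean - true_value) / true_value   # accuracy, %
    rsd = 100 * s / mean                                  # precision, %RSD
    return mean, rel_error, rsd

# Four replicate recoveries (%) against an accepted value of 100.0
mean, err, rsd = summarize([98.7, 99.3, 99.0, 98.9], true_value=100.0)
```

The two numbers, relative error and %RSD, are then compared against the acceptance criteria set during the design of the experiment.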

Equations 13.31 and 13.32 are only valid if the radioactive element in the tracer has a half-life that is considerably longer than the time needed to conduct the analysis. If this is not the case, then the decrease in activity is due both to the effect of dilution and to the natural decay of the isotope's activity. Some common radioactive isotopes for use in isotope dilution are listed in Table 13.1. [Pg.647]
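The half-life caveat can be made concrete with the first-order decay law. The activity and half-life below are illustrative (the ~8-day half-life is roughly that of 131I):

```python
import math

def decayed_activity(a0, t_elapsed, half_life):
    """Predict the activity remaining after natural decay alone,
    A(t) = A0 * exp(-ln 2 * t / t_half).  Dividing a measured
    activity by the factor exp(-ln 2 * t / t_half) removes the decay
    contribution, so any further drop reflects dilution only."""
    return a0 * math.exp(-math.log(2) * t_elapsed / half_life)

# If the analysis takes one full half-life, half the activity loss
# is due to decay rather than dilution:
a = decayed_activity(1000.0, t_elapsed=8.0, half_life=8.0)
```

When the half-life greatly exceeds the analysis time, this correction factor is essentially 1 and Equations 13.31 and 13.32 can be used directly.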

Our approach to the problem of gelation proceeds through two stages: first we consider the probability that AA and BB polymerize until all chain segments are capped by an A_f monomer; then we consider the probability that these are connected together to form a network. The actual molecular processes occur at random and not in this sequence, but mathematical analysis is feasible if we consider the process in stages. As long as the same sort of structure results from both the random and the subdivided processes, the analysis is valid. [Pg.316]
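The second stage of this argument leads to the classical Flory–Stockmayer gel-point condition. A minimal sketch, assuming an f-functional branch unit (the A_f monomer) and the standard criterion that gelation occurs when the branching probability α satisfies α(f − 1) = 1:

```python
def critical_branching(f):
    """Flory-Stockmayer gel-point criterion: a network can percolate
    once alpha * (f - 1) >= 1, i.e. each chain leaving an f-functional
    branch unit leads, on average, to at least one new branch unit."""
    if f < 3:
        raise ValueError("gelation requires branch functionality f >= 3")
    return 1.0 / (f - 1)

alpha_c = critical_branching(3)  # trifunctional branch unit
```

For a trifunctional A_f monomer this gives α_c = 1/2: below that branching probability only finite structures form, above it a network spans the system.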

Suppose we have two methods of preparing some product and we wish to see which treatment is best. When there are only two treatments, the sampling analysis discussed in the section Two-Population Test of Hypothesis for Means can be used to deduce whether the means of the two treatments differ significantly. When there are more treatments, the analysis is more detailed. Suppose the experimental results are arranged as shown in the table, with several measurements for each treatment. The goal is to see if the treatments differ significantly from each other, that is, whether their means are different when the samples have the same variance. The null hypothesis is that the treatments are all the same; the alternative hypothesis is that they are not. The statistical validity of the hypothesis is determined by an analysis of variance. [Pg.506]
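A minimal, dependency-free sketch of this analysis of variance, computing the F statistic by hand; the treatments and measurements are hypothetical:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-treatment mean square over
    within-treatment mean square.  Under the null hypothesis (all
    treatment means equal), F follows an F distribution with
    (k - 1, N - k) degrees of freedom."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n_total - k)
    return ms_between / ms_within

# Three treatments, four replicate measurements each:
f_stat = one_way_anova_f([[9.8, 10.1, 10.0, 9.9],
                          [10.4, 10.6, 10.5, 10.7],
                          [9.9, 10.0, 10.2, 10.1]])
```

A large F (here compared against the tabulated F value for 2 and 9 degrees of freedom) leads to rejection of the null hypothesis that the treatment means are equal.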

A usual, but not always valid, assumption about fj is the linear form fj(Q) = CjQ. A great deal of the literature, both classical and quantum mechanical, is devoted to the analysis of this Hamiltonian. [Pg.79]
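With this linear coupling, the Hamiltonian in question takes the standard bilinear system-bath form. A sketch in mass-weighted bath coordinates, with symbols following the usual convention (which may differ in sign and notation from the text's own):

```latex
H = H_S(Q) + \sum_j \left( \frac{p_j^2}{2} + \frac{1}{2}\,\omega_j^2 q_j^2 - c_j\, q_j\, Q \right),
\qquad
J(\omega) = \frac{\pi}{2} \sum_j \frac{c_j^2}{\omega_j}\,\delta(\omega - \omega_j)
```

All of the bath's influence on the system then enters through the single spectral density J(ω), which is why the linear-coupling assumption makes the model so tractable.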

The measure of assembly variability, q, derived from the analysis should be used as a relative performance indicator for each design evaluated. The design with the least potential variability problems, or the least failure cost, should be chosen for further development. The indices should not be taken as absolutes, since assembly variability is difficult to measure and validate. [Pg.63]

Relatively little work has been done so far on predicting bond strengths and other properties from the results of resin analysis. Ferg et al. [59] worked out correlation equations relating the chemical structures in various UF resins, prepared with different F/U molar ratios and by different procedures, to the achievable internal bond strength and the subsequent formaldehyde emission. These equations are valid only for well-defined series of resins. The basic aim of such experiments is to predict the properties of the wood-based panels from the composition and properties of the resins used. For this purpose various structural components are determined by means of 13C NMR and their ratios related to board results. Various papers in the chemical literature describe examples of such correlations, in particular for UF, MF, MUF and PF resins [59-62]. For example, one type of equation correlating the dry internal bond (IB) strength (tensile strength perpendicular to the plane of the panel) of a particleboard bonded with PF adhesive resins is as follows [17]... [Pg.1053]
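The kind of correlation described, relating 13C NMR structural ratios to a board property, can be sketched as an ordinary least-squares fit. All names and numbers below are hypothetical, not values from refs [17] or [59-62]:

```python
import numpy as np

# Hypothetical data: two 13C NMR structural-component ratios for five
# resins (rows) and the measured dry internal bond strength (N/mm^2).
X = np.array([[0.42, 1.10],
              [0.55, 0.95],
              [0.61, 0.88],
              [0.70, 0.80],
              [0.83, 0.72]])
ib = np.array([0.35, 0.42, 0.45, 0.50, 0.57])

# Ordinary least squares with an intercept: ib ~ b0 + b1*x1 + b2*x2
A = np.column_stack([np.ones(len(ib)), X])
coef, *_ = np.linalg.lstsq(A, ib, rcond=None)
predicted = A @ coef
```

As the excerpt stresses, such a fitted equation is valid only within the well-defined series of resins it was calibrated on; extrapolating the coefficients to other resin types is not justified.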

This equation may also be used to calculate the wall thickness distribution in deep truncated-cone shapes, but note that its derivation is valid only up to the point at which the spherical bubble touches the centre of the base. Thereafter the analysis involves a volume balance with freezing-off on the base and sides of the cone. [Pg.312]

Having established that these assumptions are reasonable, we need to consider the relationship between the parameters of the actual offset jet and the equivalent wall jet that will produce the same (or very similar) flow far downstream of the nozzle. It can be shown that the ratio of the initial kinematic momentum per unit length of nozzle of the wall jet to that of the offset jet, and the ratio of the two nozzle heights, depend on the ratio D/hj, where D is the offset distance between the jet nozzle and the surface of the tank and hj is the nozzle height of the offset jet. The relationship, which because of the assumptions made in the analysis is not valid at small values of D/hj, is shown in Fig 10.72. [Pg.947]

The nature of the preceding analysis limits the application of the technique to the design of remote or canopy fume hoods rather than local capture hoods. For this approach to be valid, the hoods must usually be at least two source diameters away from the emission source. [Pg.1271]

In order to validate the predictions of the theoretical analysis based on the SFM, Zauner and Jones (2000b) studied the effect of reactor scale (capacity) on the precipitation of calcium oxalate obtained by reacting supersaturated solutions of calcium chloride CaCl2 and sodium oxalate Na2C2O4. The geometries of the 4.3 L and 12 L precipitation reactors are shown in Figure 8.2, with the experimental set-up shown in Figure 8.3. [Pg.221]

