Big Chemical Encyclopedia

Data analysis calculation

Toxicity data analysis [calculation of percent inhibition (%I), calculation of LID10, calculation of toxicity units (TU = (1/LID10) x 100)]... [Pg.37]
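
A minimal arithmetic sketch of the quantities named in this excerpt, assuming LID10 is reported as a dilution factor; the function names and example numbers are hypothetical, not taken from the source.

```python
def percent_inhibition(response_control: float, response_sample: float) -> float:
    """Percent inhibition (%I) relative to an untreated control."""
    return 100.0 * (response_control - response_sample) / response_control

def toxicity_units(lid10: float) -> float:
    """TU = (1 / LID10) x 100, as defined in the excerpt."""
    return (1.0 / lid10) * 100.0

print(percent_inhibition(1.00, 0.65))  # 35.0 % inhibition
print(toxicity_units(25.0))            # 4.0 toxicity units for LID10 = 25
```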

Establish loads for segments identified, including water quality monitoring, modeling, data analysis, calculation methods, and the list of pollutants to be regulated ... [Pg.443]

The data analysis module of ELECTRAS is twofold. One part was designed for general statistical data analysis of numerical data. The second part offers a module for analyzing chemical data. The difference between the two modules is that the module for mere statistics applies the statistical methods or neural networks directly to the input data, while the module for chemical data analysis also contains methods for the calculation of descriptors for chemical structures (cf. Chapter 8, Descriptors). Descriptors, and thus structure codes, are calculated for the input structures, and then the statistical methods and neural networks can be applied to the codes. [Pg.450]
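
As an illustration of the two-step path described for the chemical-data module (descriptors first, statistics second), here is a minimal sketch, not the ELECTRAS code itself; RDKit is assumed to be available, and the example molecules and descriptor choice are arbitrary.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors
import numpy as np

smiles = ["CCO", "c1ccccc1", "CC(=O)O"]          # example input structures
mols = [Chem.MolFromSmiles(s) for s in smiles]

# Structure codes: a small descriptor vector per molecule.
X = np.array([[Descriptors.MolWt(m),
               Descriptors.MolLogP(m),
               Descriptors.TPSA(m)] for m in mols])

# The statistical step then works on X; simple column statistics stand in here.
print(X.mean(axis=0), X.std(axis=0))
```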

The following texts provide additional information about ANOVA calculations, including discussions of two-way analysis of variance: Graham, R. C. Data Analysis for the Chemical Sciences; VCH Publishers: New York, 1993. [Pg.704]

The endpoint value for any changing concentration, such as [A]t, sometimes referred to as the infinity point, is extremely important in the data analysis, particularly when the order of the reaction is not certain. The obvious way to determine it, i.e., by allowing the reaction to proceed for a long time, is not always reliable. It is possible for secondary reactions to interfere. It may sometimes be better to calculate the endpoint from a knowledge of the... [Pg.508]

The comparison with experiment can be made at several levels. The first, and most common, is in the comparison of derived quantities that are not directly measurable, for example, a set of average crystal coordinates or a diffusion constant. A comparison at this level is convenient in that the quantities involved describe directly the structure and dynamics of the system. However, obtaining these quantities from experiment and/or simulation may require approximation and model-dependent data analysis. For example, to obtain experimentally a set of average crystallographic coordinates, a physical model to interpret an electron density map must be imposed. To avoid these problems the comparison can be made at the level of the measured quantities themselves, such as diffraction intensities or dynamic structure factors. A comparison at this level still involves some approximation. For example, background corrections have to be made in the experimental data reduction. However, fewer approximations are necessary for the structure and dynamics of the sample itself, and comparison with experiment is normally more direct. This approach requires a little more work on the part of the computer simulation team, because methods for calculating experimental intensities from simulation configurations must be developed. The comparisons made here are of experimentally measurable quantities. [Pg.238]
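
A minimal sketch of the last point: computing a scattering intensity directly from one simulation configuration, so the comparison can be made at the level of measured quantities. This is a generic |sum of phase factors|^2 expression with unit form factors and random coordinates, not the authors' actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
positions = rng.random((100, 3)) * 20.0   # hypothetical atomic coordinates (Angstrom)
f = np.ones(len(positions))               # atomic form factors set to 1 for brevity

def intensity(q_vector, coords, form_factors):
    """I(q) = |sum_j f_j exp(i q . r_j)|^2 for a single configuration."""
    phase = coords @ q_vector             # q . r_j for every atom
    amplitude = np.sum(form_factors * np.exp(1j * phase))
    return np.abs(amplitude) ** 2

q = np.array([0.5, 0.0, 0.0])             # one scattering vector (1/Angstrom)
print(intensity(q, positions, f))
```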

Four general classes of HRA methods are: (1) expert judgment, (2) performance process simulation, (3) performance data analysis, and (4) dependency calculations. These classes are encompassed in the ten methods, many of which contain multiple classes of the methods. No attempt is made to classify them according to the methods. [Pg.176]

In practice the finite-field calculation is not so simple because the higher-order terms in the induced dipole and the interaction energy are not negligible. Normally we use a number of applied fields along each axis, typically multiples of 10^-3 a.u., and use the standard techniques of numerical analysis to extract the required data. Such calculations are not particularly accurate, because they use numerical methods to find differentials. [Pg.289]
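
A hedged numerical sketch of the finite-field procedure: the polarizability is obtained by a central difference of the induced dipole with respect to the applied field, and the higher-order terms mentioned above appear as a small contamination of the result. The dipole function below is a synthetic placeholder standing in for a real electronic-structure calculation.

```python
def dipole_z(field_z: float) -> float:
    """Placeholder: in practice this would come from a quantum-chemistry run
    with a static field applied along z (here a fake nonlinear response)."""
    alpha, gamma = 10.0, 500.0
    return alpha * field_z + gamma * field_z**3 / 6.0

F = 1.0e-3                                             # field step in atomic units
alpha_zz = (dipole_z(+F) - dipole_z(-F)) / (2.0 * F)   # central difference
print(alpha_zz)   # close to 10, with a small contribution from higher-order terms
```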

By means of this expression, the values of Yt yield [A]t, and Eq. (3-28) provides the means for data analysis. Or, with additional algebra, one can express Yt directly, and float both k1 and Ye in the calculation. As an example of the application of Eq. (3-28), consider the dimer-monomer equilibration of the triphenylmethyl radical 2... [Pg.51]
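
A minimal fitting sketch, not Eq. (3-28) itself, of the idea of expressing Yt directly and floating both the rate constant and the endpoint value in the calculation; a simple exponential approach to the endpoint is assumed purely for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def y_model(t, k, y0, ye):
    """Approach to the endpoint: Y_t = Y_e + (Y_0 - Y_e) exp(-k t)."""
    return ye + (y0 - ye) * np.exp(-k * t)

t = np.linspace(0.0, 10.0, 20)
y_obs = y_model(t, 0.5, 1.0, 0.2) + np.random.default_rng(1).normal(0, 0.01, t.size)

popt, pcov = curve_fit(y_model, t, y_obs, p0=[0.3, 1.0, 0.0])
k_fit, y0_fit, ye_fit = popt        # the endpoint Ye is fitted, not measured
print(k_fit, ye_fit)
```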

Calculations — See Data analysis [Pg.456]

Data analysis routines may change with time, and it is desirable to be able to reanalyze old data with new analysis software. Our tensile test analysis software creates plots of engineering stress as a function of engineering strain, as illustrated in Figure 3. Our flexure test software plots maximum fiber stress as a function of maximum fiber strain, with the option of including Poisson's ratio in the calculations. Both routines generate printed reports which present the test results in tabular form, as illustrated in Figure 4. [Pg.50]
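
The arithmetic behind the engineering stress-strain plots described here is simple; a short sketch follows, with the specimen dimensions and raw data invented for illustration (this is not the laboratory's actual software).

```python
import numpy as np

load_N = np.array([0.0, 50.0, 100.0, 150.0])        # raw load data (N)
extension_mm = np.array([0.0, 0.1, 0.2, 0.35])      # raw crosshead extension (mm)

area_mm2 = 10.0          # original cross-sectional area (assumed)
gauge_mm = 50.0          # original gauge length (assumed)

eng_stress_MPa = load_N / area_mm2                  # N/mm^2 = MPa
eng_strain = extension_mm / gauge_mm                # dimensionless

for s, e in zip(eng_stress_MPa, eng_strain):
    print(f"strain {e:.4f}  stress {s:.1f} MPa")
```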

The first term is related to the van der Waals interaction, with A being the Hamaker constant. The second term includes other forces that decay exponentially with distance. As discussed, these may include double-layer, solvation, and hydration forces. In our data analysis, B and C were used as fitting variables; the Hamaker constant A was calculated using Lifshitz theory [6]. [Pg.254]
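
As a sketch only: the description is consistent with a force law of the form F(D) = -A/(6D^2) + B exp(-D/C), a sphere-geometry van der Waals term plus a single exponentially decaying term, but the exact expression used by the authors is not reproduced here. With A fixed from Lifshitz theory, B and C are floated in the fit, as in the excerpt; all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

A_HAMAKER = 1.0e-20          # J, assumed value from Lifshitz theory

def force(D, B, C):
    """D in metres; A fixed, B (amplitude) and C (decay length) fitted."""
    return -A_HAMAKER / (6.0 * D**2) + B * np.exp(-D / C)

D = np.linspace(1e-9, 20e-9, 40)
F_obs = force(D, 5e-3, 2e-9) + np.random.default_rng(2).normal(0, 1e-5, D.size)

(B_fit, C_fit), _ = curve_fit(force, D, F_obs, p0=[1e-3, 3e-9])
print(B_fit, C_fit)
```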

Within the pharmaceutical industry we have progressed from the point where computers in the laboratory were rarely present or were used for little beyond spreadsheet calculations. Now computers are ubiquitous in pharmaceutical research and development laboratories, and nearly everyone has at least one, used in some way to aid in his or her role. It should come as no surprise that the development of hardware and software over the last 30 years has expanded the scope of computer use to virtually all stages of pharmaceutical research and development (data analysis, data capture, monitoring and decision making). Although there are many excellent books published that are focused on in-depth discussions of computer-aided drug design, bioinformatics, or other related individual topics, none has addressed this broader utilization of... [Pg.831]

The conclusion that highly vibrationally excited H2 correlated with low-J CO represents a new mechanistic pathway, and the elucidation of that pathway, are greatly facilitated by comparison with quasiclassical trajectory calculations of Bowman and co-workers [8, 53] performed on a PES fit to high-level electronic structure calculations [54]. The correlated H2/CO state distributions from these trajectories, shown as the dashed lines in Fig. 8, show reasonably good agreement with the data. Analysis of the trajectories confirms that the H2(v = 0-4) population represents dissociation over the skewed transition state, as expected. [Pg.239]

The ESR measurements were made at RT or 77 K on a Varian E-9 spectrometer (X-band), equipped with an on-line computer for data analysis. Spin-Hamiltonian parameters (g and A values) were obtained from calculated spectra using the program SIM14 A [26]. The absolute concentration of the paramagnetic species was determined from the integrated area of the spectra. Values of g were determined using as reference the sharp peak at g = 2.0008 of the E′ center (marked with an asterisk in Fig. 3); the center was formed by UV irradiation of the silica dewar used as sample holder. [Pg.692]

The data summarization procedures will depend on the objectives and type of data. Statistical calculations should be supported with graphical analysis techniques. A statement of precision and bias should be included with all important results of the study. [Pg.83]
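
A minimal sketch of the precision and bias statement called for above, assuming precision is reported as a (relative) standard deviation of replicates and bias as the difference between the replicate mean and an accepted reference value.

```python
import numpy as np

replicates = np.array([10.2, 9.9, 10.4, 10.1, 10.3])   # replicate measurements
reference = 10.0                                        # accepted/true value

mean = replicates.mean()
precision_sd = replicates.std(ddof=1)                   # sample standard deviation
rsd_percent = 100.0 * precision_sd / mean
bias = mean - reference

print(f"mean={mean:.2f}, s={precision_sd:.2f} ({rsd_percent:.1f}% RSD), bias={bias:+.2f}")
```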

In the context of data analysis we divide by n rather than by (n - 1) in the calculation of the variance. This procedure is also called autoscaling. It can be verified in Table 31.5 how these transformed data are derived from those of Table 31.4. [Pg.122]
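
A short numerical sketch of this scaling, mean-centering each column and dividing by a standard deviation computed with n rather than (n - 1); the numbers are illustrative, not those of Table 31.4.

```python
import numpy as np

X = np.array([[4.0, 2.0],
              [6.0, 8.0],
              [5.0, 5.0]])

mean = X.mean(axis=0)
std_n = X.std(axis=0, ddof=0)        # divide by n, as in the text
X_auto = (X - mean) / std_n

print(X_auto)
print(X_auto.mean(axis=0), X_auto.std(axis=0, ddof=0))   # ~0 and 1 per column
```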

Sets of spectroscopic data (IR, MS, NMR, UV-Vis) or other data are often subjected to one of the multivariate methods discussed in this book. One of the issues in this type of calculation is the reduction of the number of variables by selecting a set of variables to be included in the data analysis. The opinion is gaining support that a selection of variables prior to the data analysis improves the results. For instance, variables that are little or not at all correlated with the property to be modeled are disregarded. Another approach is to compress all variables into a few features, e.g. by a principal components analysis (see Section 31.1). This is called... [Pg.550]
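
A hedged sketch of the two approaches mentioned, variable selection by correlation with the modeled property and compression into a few principal components; scikit-learn is assumed available and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 50))              # e.g. 50 spectral variables
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.1, size=30)

# (i) variable selection: keep only variables well correlated with y
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
X_selected = X[:, corr > 0.3]

# (ii) feature compression: a few principal components instead of all variables
X_scores = PCA(n_components=3).fit_transform(X)

print(X_selected.shape, X_scores.shape)
```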

In practice, when reservoir parameters such as porosities and permeabilities are estimated by matching reservoir model calculated values to field data, one has some prior information about the parameter values. For example, porosity and permeability values may be available from core data analysis and well test analysis. In addition, the parameter values are known to be within certain bounds for a particular area. All this information can be incorporated in the estimation method of the simulator by introducing prior parameter distributions and by imposing constraints on the parameters (Tan and Kalogerakis, 1993). [Pg.381]
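
A minimal sketch, not the simulator of Tan and Kalogerakis, of how prior information and bounds can enter the estimation: the least-squares objective is augmented with a penalty for departing from the prior porosity and permeability values, and hard bounds keep the parameters in their admissible ranges. The reservoir model below is a toy placeholder and all numbers are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

prior = np.array([0.20, 150.0])          # prior porosity, permeability (mD)
prior_sd = np.array([0.05, 50.0])        # confidence in the prior
bounds = ([0.05, 10.0], [0.35, 500.0])   # physically admissible ranges

def model(theta):
    """Placeholder reservoir model: returns 'calculated' field observations."""
    phi, k = theta
    return np.array([phi * 100.0, np.log(k), phi * k])

field_data = model(np.array([0.24, 180.0]))   # synthetic observations

def residuals(theta):
    match = model(theta) - field_data             # data mismatch
    penalty = (theta - prior) / prior_sd          # prior (regularisation) term
    return np.concatenate([match, penalty])

fit = least_squares(residuals, x0=prior, bounds=bounds)
print(fit.x)
```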

In PAMPA measurements each well is usually a one-point-in-time (single-timepoint) sample. By contrast, in the conventional multitimepoint Caco-2 assay, the acceptor solution is frequently replaced with fresh buffer solution so that the solution in contact with the membrane contains no more than a few percent of the total sample concentration at any time. This condition can be called a physically maintained sink. Under pseudo-steady state (when a practically linear solute concentration gradient is established in the membrane phase; see Chapter 2), lipophilic molecules will distribute into the cell monolayer in accordance with the effective membrane-buffer partition coefficient, even when the acceptor solution contains nearly zero sample concentration (due to the physical sink). If the physical sink is maintained indefinitely, then eventually all of the sample will be depleted from both the donor and membrane compartments, as the flux approaches zero (Chapter 2). In conventional Caco-2 data analysis, a very simple equation [Eq. (7.10) or (7.11)] is used to calculate the permeability coefficient. But when combinatorial (i.e., lipophilic) compounds are screened, this equation is often invalid, since a considerable portion of the molecules partitions into the membrane phase during the multitimepoint measurements. [Pg.138]
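
For orientation, a sketch of the simple sink-condition formula commonly used in Caco-2 work, Papp = (dQ/dt)/(A x CD0); whether this is exactly Eq. (7.10) or (7.11) of the cited chapter is not asserted here, and all numbers are illustrative.

```python
import numpy as np

t_s = np.array([0.0, 1800.0, 3600.0, 5400.0])       # sampling times (s)
Q_mol = np.array([0.0, 1.1e-9, 2.2e-9, 3.2e-9])     # cumulative amount in acceptor (mol)

A_cm2 = 1.13          # filter area (cm^2), illustrative
C_D0 = 1.0e-5         # initial donor concentration (mol/cm^3), illustrative

dQ_dt = np.polyfit(t_s, Q_mol, 1)[0]                # slope of Q versus t (mol/s)
P_app = dQ_dt / (A_cm2 * C_D0)                      # cm/s

print(f"P_app = {P_app:.2e} cm/s")
```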

Once a test is complete, another menu option provides the data analysis results in the form of a hard-copy report printed on the local line printer. The report includes experiment identification information and the apparent viscosities calculated for each data set. A subset of the data analysis program is scheduled automatically by the control programs while the experiment is in progress and provides immediate on-line analysis of apparent viscosities for each data set as it is collected. The results are viewed using the real-time data display program (Figure 5). [Pg.121]

An Instron Tensile Tester Model TM was interfaced to a microcomputer for data collection and transmission to a minicomputer. A FORTRAN program was developed to allow data analysis by the minicomputer. The program generates stress-strain curves from the raw data, calculates physical parameters, and produces reports and plots. [Pg.123]

