Big Chemical Encyclopedia


Data interpretation factors

The other parameters used in the calculation of STOIIP and GIIP have been discussed in Section 5.4 (Data Interpretation). The formation volume factors (Bo and Bg) were introduced in Section 5.2 (Reservoir Fluids). We can therefore proceed to the quick and easy deterministic method most frequently used to obtain a volumetric estimate. It can be done on paper or with available software; the latter is reliable only if the software is constrained by the geological reservoir model. [Pg.155]
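The deterministic volumetric method referred to above combines gross rock volume, net-to-gross ratio, porosity, water saturation, and the formation volume factors in the standard volumetric equations. The sketch below is a minimal illustration of that calculation; the function names and all input values are assumptions for the example, not figures from the text.

```python
# Deterministic volumetric estimate (a minimal sketch; every numeric
# input below is illustrative, not data from the cited source).

def stoiip(grv_m3, ntg, porosity, sw, bo):
    """Stock tank oil initially in place (stock-tank m3):
    STOIIP = GRV * N/G * phi * (1 - Sw) / Bo
    """
    return grv_m3 * ntg * porosity * (1.0 - sw) / bo

def giip(grv_m3, ntg, porosity, sw, bg):
    """Gas initially in place (standard m3):
    GIIP = GRV * N/G * phi * (1 - Sw) / Bg
    """
    return grv_m3 * ntg * porosity * (1.0 - sw) / bg

# Illustrative inputs: 100e6 m3 gross rock volume, 80% net-to-gross,
# 20% porosity, 30% water saturation, Bo = 1.2 rm3/stm3.
oil_in_place = stoiip(100e6, 0.8, 0.2, 0.3, 1.2)
print(f"STOIIP = {oil_in_place:.3e} stm3")
```

Running the same inputs through a spreadsheet or dedicated software should reproduce these numbers exactly; the value of software only appears once it is constrained by the geological reservoir model.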

Table 1 Influence of various factors on the use of AI techniques for NDT data interpretation.
The aim of factor analysis is to calculate a rotation matrix R which rotates the abstract factors (V) (principal components) into interpretable factors. The various algorithms for factor analysis differ in the criterion to calculate the rotation matrix R. Two classes of rotation methods can be distinguished (i) rotation procedures based on general criteria which are not specific for the domain of the data and (ii) rotation procedures which use specific properties of the factors (e.g. non-negativity). [Pg.254]
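A common example of the first class of rotation procedure (a general, domain-independent criterion) is varimax. The sketch below shows a standard SVD-based varimax rotation; it is a generic illustration, not an implementation from the cited work, and the loadings matrix passed to it is arbitrary.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Varimax rotation of a loadings matrix (p variables x k factors).

    Iteratively builds an orthogonal rotation matrix R that maximizes
    the variance of the squared loadings, then returns the rotated
    loadings V @ R together with R itself.
    """
    p, k = loadings.shape
    R = np.eye(k)
    var = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion, projected back via SVD
        u, s, vt = np.linalg.svd(
            loadings.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        )
        R = u @ vt
        new_var = np.sum(s)
        if new_var - var < tol:
            break
        var = new_var
    return loadings @ R, R
```

Because R is orthogonal, the rotation preserves the subspace spanned by the abstract factors while redistributing the loadings toward a more interpretable (simple) structure. Procedures of the second class would instead impose domain constraints such as non-negativity on the rotated factors.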

Indirect Immunotoxic Effects. A problem related to data interpretation is how to distinguish secondary effects that may indirectly result in immunotoxicity from the primary effects of immunotoxicity in preclinical toxicity studies. Various factors may produce pathology similar to that of an immunotoxin ... [Pg.585]

However, the amount of error in the data is not generally the limiting factor in data interpretation. Rather, the locations at which the data are taken most severely hinder progress toward a mechanistic model. Reference to Fig. 1 indicates that the decision between the dual- and single-site models would be quite difficult, even with very little error of measurement, if data are taken only in the 2- to 10-atm range. However, quite substantial error can be tolerated if the data lie above 15 atm total pressure (assuming data can be taken here). Techniques are presented that will seek out such critical experiments to be run (Section VII). [Pg.100]

The advent of analytical techniques capable of providing data on a large number of analytes in a given specimen had necessitated that better techniques be employed in the assessment of data quality and for data interpretation. In 1983 and 1984, several volumes were published on the application of pattern recognition, cluster analysis, and factor analysis to analytical chemistry. These treatises provided the theoretical basis for analyzing such environmentally related data. The coupling of multivariate approaches to environmental problems was yet to be accomplished. [Pg.293]

Grimalt and coworkers have used a variety of data interpretation techniques to show the influence of lake temperature and lake altitude on POC concentrations in fish [4, 59, 60, 63, 64]. It is important to bear in mind that all sorts of factors may be contributing to differences in the concentrations of POC in fish from different lakes, including the distance of the lakes from POC source regions [11]. Such factors may confound an interpretation of increasing POC concentrations in fish with altitude in terms of mountain cold-trapping. For example, the age of organisms is a factor in... [Pg.166]

Although the demonstration of some degree of maternal toxicity is required in regulatory developmental toxicology studies for both pharmaceuticals and chemicals (1), marked maternal toxicity may be a confounding factor in study design and data interpretation. [Pg.311]

It is often best to begin with purified proteins when studying the relationships between protein structure and function at the molecular level. The presence of multiple proteins often complicates data interpretation, as it is not clear whether effects are due to protein interactions, to variations in the ratio of proteins, or to other factors. In these studies it is advisable to select tests based on a fundamental physical or chemical property, since results are less likely to vary with the test conditions or instrumentation used. Unfortunately, when such simplified (often dilute) systems are used, it becomes less likely that the property under study will relate directly to function in a food system. Something as seemingly insignificant as protein concentration in the model system can have a large influence on the results obtained. Also, the relative importance or contribution of a functional property to a complex food system can be misinterpreted in a purified model system. [Pg.292]

In general, TREs will be most successful when an effluent is consistently toxic, when the loss of toxicity over time is minimal, and when the factors contributing to toxicity do not vary between samples. Conversely, the process can be more difficult if toxicity is transient, if the samples quickly lose toxicity over time, or if the factors contributing to toxicity are variable (i.e., different causative agents). Data interpretation can also be complicated by low contaminant concentrations and marginal toxicity. For example, it can be difficult to discern differences in toxicity between a toxic final effluent and TIE treatments when the mortality in the full-strength (100%) effluent is close to 50% (Novak et al., 2002). [Pg.200]

We can easily quantify measurement error because of the existence of a well-developed approach to analytical methods and laboratory QC protocols. Statistically expressed accuracy and precision of an analytical method are the primary indicators of measurement error. However, no matter how accurate and precise the analysis may be, qualitative factors, such as errors in data interpretation, sample management, and analytical methodology, will increase the overall analytical error or even render results unusable. These qualitative laboratory errors, usually made through negligence or lack of information, may arise from any of the following actions ... [Pg.7]
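The two quantitative indicators named above, accuracy and precision, are conventionally expressed as percent recovery of a known value and as relative standard deviation of replicates. The sketch below shows one common way to compute them from QC replicate data; the function name and the replicate values are illustrative assumptions, not taken from the text.

```python
import statistics

def accuracy_and_precision(measured, true_value):
    """Express measurement error for a set of QC replicates:
    accuracy as percent recovery of the mean against the known value,
    precision as percent relative standard deviation (%RSD).
    """
    mean = statistics.mean(measured)
    recovery_pct = 100.0 * mean / true_value
    rsd_pct = 100.0 * statistics.stdev(measured) / mean
    return recovery_pct, rsd_pct

# Illustrative replicate results for a 10.0 mg/L QC standard
recovery, rsd = accuracy_and_precision([9.8, 10.1, 9.9, 10.2, 10.0], 10.0)
```

Note that these statistics only capture the quantifiable part of the error; the qualitative errors listed in the text (sample management, interpretation mistakes) sit outside what such a calculation can detect.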

Amide III VCD from aqueous solution was published in 1987 [31], and normal coordinate analyses of simple peptides and a number of isotopomers were carried out to define the exact nature of the amide III vibration [32]. Recently, we have reported a detailed comparison of computational and experimental VCD results in the amide I region [33]. Keiderling has pushed the frontiers toward collecting VCD data on a number of proteins, and interpreting the data, via factor analysis, in terms of percentages of the common secondary structures [34,35]. A number of excellent reviews, summarizing the progress in peptide VCD in the 1985-1991 time span, have appeared [36,37]. [Pg.107]

By definition, all interpretive methods of optimization require knowledge of the capacity factors of all individual solutes. This is the fundamental difference between the simultaneous and sequential methods of optimization (sections 5.2 and 5.3, respectively) and the interpretive methods of section 5.5. Moreover, in the specific cases in which only a limited number of components is of interest or in which weighting factors are assigned to the individual solutes (see section 4.6.1) it is also necessary to recognize the individual peaks (at least the relevant ones) in each chromatogram. In section 5.5 we have tacitly assumed that it would be possible to obtain the retention data (capacity factors) of all the individual solutes at each experimental location. [Pg.233]
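The capacity (retention) factor on which these interpretive methods depend is computed from each solute's retention time and the column hold-up time. The sketch below is a generic illustration of that definition; the retention times are assumed values for the example, not data from the cited text.

```python
def capacity_factor(t_r, t_0):
    """Capacity (retention) factor k' = (tR - t0) / t0,
    where tR is the solute retention time and t0 the hold-up time
    of an unretained marker, in the same time units.
    """
    return (t_r - t_0) / t_0

# Illustrative chromatogram: hold-up time 1.0 min, three solutes
k_values = [capacity_factor(t, 1.0) for t in (2.5, 4.0, 6.4)]
```

An interpretive optimization would require such k' values for every individual solute at each experimental location, which is why peak recognition in each chromatogram becomes a prerequisite.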

A9.3.5.6.2 Where instability is a factor in determining the level of exposure during the test, an essential prerequisite for data interpretation is the existence of measured exposure concentrations at suitable time points throughout the test. In the absence of analytically measured concentrations at least at the start and end of test, no valid interpretation can be made and the test should be considered as invalid for classification purposes. Where measured data are available, a number of practical rules can be considered by way of guidance in interpretation ... [Pg.455]

A few pyrolysis studies done on yeasts and yeast-like fungi did not attempt to analyze individual polysaccharides but rather to obtain a fingerprint characterization [67]. It was also common to use statistical techniques such as factor analysis for the data interpretation. It was not unusual to find N-acetylamino sugar units in fungal polysaccharides. These units showed characteristic peaks in Py-MS that allowed the distinction of different materials. [Pg.305]

