Big Chemical Encyclopedia


Data Analysis Quantitative modeling

The most frequently used technique for the determination of crystal structures is single crystal analysis. However, if no single crystals of suitable size and quality are available, powder diffraction is the nearest alternative. Furthermore, single crystal analysis does not provide information on the bulk material and is not a routinely used technique for the determination of microstructural properties. Neither is it often used to characterize disorder in materials. Macroscopic stresses in components, both residual from processing and in situ under load, are studied by powder diffraction, as is the texture of polycrystalline samples. Powder diffraction remains to this day a crucial tool in the characterization of materials, with increasing importance and breadth of application as instrumentation, methods, data analysis and modeling become more powerful and quantitative. [Pg.588]

The powder diffraction experiment (WAXS) remains a crucial tool in the characterization of materials, and it has been used for many decades with increasing importance and breadth of application as instrumentation, methods, data analysis, and modeling become more powerful and quantitative. Although powder data usually lack the three-dimensionality of the diffraction image, the fundamental nature of the method is easily appreciated from the fact that each powder diffraction pattern represents a 1-D snapshot of the 3-D reciprocal lattice of a crystal. The quality of the data is usually limited by the resolution of the powder diffractometer and by the physical and chemical conditions of the specimen (Figure 8.2). [Pg.84]
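The 1-D nature of the pattern can be made concrete with Bragg's law: each peak position maps to a single d-spacing, i.e. one shell of the 3-D reciprocal lattice. A minimal sketch (the peak positions and the Cu K-alpha wavelength are illustrative, not taken from the text):

```python
import math

def d_spacing(two_theta_deg, wavelength=1.5406):
    """Bragg's law: lambda = 2 d sin(theta).
    Returns the lattice d-spacing in the same units as the wavelength
    (here angstroms, Cu K-alpha) for a peak at the given 2-theta angle."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

# Each peak position in the 1-D powder pattern collapses one shell of the
# 3-D reciprocal lattice onto a single d value.
for tt in (28.44, 47.30, 56.12):   # example peak positions, degrees 2-theta
    print(f"2theta = {tt:6.2f} deg  ->  d = {d_spacing(tt):.4f} A")
```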

Stochastic identification techniques, in principle, provide a more reliable method of determining the process transfer function. Most workers have used the Box and Jenkins [59] time-series analysis techniques to develop dynamic models. An introduction to these methods is given by Davies [60]. In stochastic identification, a low amplitude sequence (usually a pseudorandom binary sequence, PRBS) is used to perturb the setting of the manipulated variable. The sequence generally has an implementation period smaller than the process response time. By evaluating the auto- and cross-correlations of the input series and the corresponding output data, a quantitative model can be constructed. The parameters of the model can be determined by using a least squares analysis on the input and output sequences. Because this identification technique can handle many more parameters than simple first-order plus dead-time models, the process and its related noise can be modeled more accurately. [Pg.142]
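The least-squares step can be sketched as follows. The first-order process, its parameter values, and the noise level are hypothetical; the point is only that a low-amplitude PRBS input plus a least-squares fit on the input and output sequences recovers the model parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete first-order process: y[k] = a*y[k-1] + b*u[k-1] + noise
a_true, b_true = 0.8, 0.5
N = 2000
u = rng.choice([-1.0, 1.0], size=N)   # low-amplitude pseudorandom binary sequence
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.05 * rng.standard_normal()

# Least-squares estimate of the model parameters from the input/output sequences
X = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(f"a = {a_hat:.3f} (true {a_true}), b = {b_hat:.3f} (true {b_true})")
```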

Artificial Intelligence in Chemistry Chemical Engineering Expert Systems Chemometrics Multivariate View on Chemical Problems Electrostatic Potentials Chemical Applications Environmental Chemistry QSAR Experimental Data Evaluation and Quality Control Fuzzy Methods in Chemistry Infrared Data Correlations with Chemical Structure Infrared Spectra Interpretation by the Characteristic Frequency Approach Machine Learning Techniques in Chemistry NMR Data Correlation with Chemical Structure Protein Modeling Protein Structure Prediction in 1D, 2D, and 3D Quality Control, Data Analysis Quantitative Structure-Activity Relationships in Drug Design Quantitative Structure-Property Relationships (QSPR) Shape Analysis Spectroscopic Databases Structure Determination by Computer-based Spectrum Interpretation. [Pg.1826]

The most difficult kind of experimental problem is one in which quantitative data analysis is model dependent. Kinetics experiments are notorious in this regard, since one first assumes a mechanism, uses the mechanism to derive a rate law in terms of the rate constants, and then fits the raw data to the rate law. Any problems with the assumed mechanism carry over into the conclusions drawn from the data. This problem is particularly delicate for fluorescence decay measurements for several reasons ... [Pg.33]
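A minimal illustration of this model dependence: assume a first-order mechanism A -> P, derive the rate law ln[A] = ln[A]0 - kt, and fit synthetic data to it (the rate constant and noise level are invented for the example). If the assumed mechanism were wrong, the fitted k would inherit that error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "raw data": first-order decay with k = 0.30 plus 1% noise
k_true, A0 = 0.30, 1.0
t = np.linspace(0.0, 10.0, 25)
conc = A0 * np.exp(-k_true * t) * (1 + 0.01 * rng.standard_normal(t.size))

# Fit the rate law derived from the assumed mechanism: ln[A] = ln[A]0 - k t
slope, intercept = np.polyfit(t, np.log(conc), 1)
k_fit = -slope
print(f"fitted k = {k_fit:.3f} (true {k_true})")
```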

One task of data analysis is to establish a model which quantitatively describes the relationships between data variables and can then be used for prediction. [Pg.446]

Often the goal of a data analysis problem requires more than simple classification of samples into known categories. It is very often desirable to have a means to detect outliers and to derive an estimate of the level of confidence in a classification result. These are things that go beyond strictly nonparametric pattern recognition procedures. Also of interest is the ability to empirically model each category so that it is possible to make quantitative correlations and predictions with external continuous properties. As a result, a modeling and classification method called SIMCA has been developed to provide these capabilities (29-31). [Pg.425]
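The core SIMCA idea, a separate principal-component model per category with classification by residual distance, can be sketched as follows. The data and the one-component class model are illustrative only; a full SIMCA implementation would also add statistical critical distances for outlier detection:

```python
import numpy as np

def fit_class_model(X, n_pc=1):
    """Fit a principal-component model to one class (SIMCA-style sketch)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_pc]            # class mean and retained loadings

def residual(x, model):
    """Distance of a sample from the class PC subspace."""
    mean, P = model
    xc = x - mean
    return np.linalg.norm(xc - P.T @ (P @ xc))

rng = np.random.default_rng(2)
# Hypothetical class: samples spread along the direction [1, 1, 0] plus noise
class_a = rng.normal([0, 0, 0], 0.1, size=(30, 3)) + np.outer(rng.normal(size=30), [1, 1, 0])
model_a = fit_class_model(class_a)

# A sample near the class subspace has a small residual; an outlier does not.
print(residual(np.array([2.0, 2.0, 0.0]), model_a))   # small
print(residual(np.array([0.0, 0.0, 5.0]), model_a))   # large
```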

The complexity of the swelling kinetics of hydrogels means that only the simplest cases can be modeled quantitatively. Thus this section focuses on identification of rate-influencing phenomena and data analysis rather than the extensive theoretical modeling of the kinetic phenomena that has been done on this subject. Reviews of theoretical modeling include those by Peppas and Korsmeyer [119], Frisch [120], and Windle [121]. [Pg.521]

For acute releases, the fault tree analysis is a convenient tool for organizing the quantitative data needed for model selection and implementation. The fault tree represents a hierarchy of events that precede the release of concern. This hierarchy grows like the branches of a tree as we track back through one cause built upon another (hence the name, "fault tree"). Each level of the tree identifies each antecedent event, and the branches are characterized by probabilities attached to each causal link in the sequence. The model applications are needed to describe the environmental consequences of each type of impulsive release of pollutants. Thus, combining the probability of each event with its quantitative consequences supplied by the model, one is led to the expected value of ambient concentrations in the environment. This distribution, in turn, can be used to generate a profile of exposure and risk. [Pg.100]
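Combining event probabilities with modeled consequences reduces, in the simplest case, to a probability-weighted sum. A toy sketch with invented event probabilities and modeled concentrations:

```python
# Hypothetical fault-tree leaves: probability of each release event and the
# ambient concentration (from a dispersion model) it would produce.
events = [
    ("valve failure", 0.010, 12.0),   # (name, annual probability, ppm)
    ("tank rupture",  0.001, 250.0),
    ("flange leak",   0.050, 1.5),
]

# Combining each event's probability with its modeled consequence gives the
# expected ambient concentration used for exposure/risk profiling.
expected = sum(p * c for _, p, c in events)
print(f"expected concentration: {expected:.3f} ppm")
```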

Since a larger sample volume is presumed to be probed, the use of transmission mode has led to simpler, more accurate models requiring fewer calibration samples [50]. Scientists at AstraZeneca found that with a transmission Raman approach as few as three calibration samples were required to obtain prediction errors nearly equivalent to their full model [42]. For a fixed 10-s acquisition time, the transmission system had prediction errors as much as 30% less than the WAI system, though both approaches had low errors. It is hoped that this approach in combination with advanced data analysis techniques, such as band target entropy minimization (BTEM) [51], might help improve Raman's quantitative sensitivity further. [Pg.210]

PK models (Section 13.2.4), PD models (Section 13.2.5), and PK/PD models (Section 13.2.6) can be used in two different ways, that is, in simulations (Section 13.2.7) and in data analysis (Section 13.2.8). Simulations can be performed if the model structure and its underlying parameter values are known. In fact, for any arbitrary dose or dosing schedule the drug concentration profile in each part of the model can be calculated. The quantitative measures of the effectiveness of drug targeting (Section 13.4) can also be evaluated. If actual measurements have been performed in in-vivo experiments in laboratory animals or man, the relevant model structure and its parameter values can be assessed by analysis of plasma disappearance curves, excretion rate profiles, tissue concentration data, and so forth (Section 13.2.8). [Pg.338]
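As a minimal simulation example, a one-compartment PK model with first-order elimination gives the plasma concentration profile C(t) = (Dose/V) exp(-k t). The dose, volume, and rate constant below are illustrative, not taken from the text:

```python
import numpy as np

# One-compartment PK model with first-order elimination (a minimal sketch;
# dose, distribution volume, and k_el are illustrative values only).
dose, V, k_el = 100.0, 50.0, 0.1      # mg, L, 1/h
t = np.linspace(0, 24, 49)            # hours
C = (dose / V) * np.exp(-k_el * t)    # plasma concentration profile, mg/L

print(f"C(0) = {C[0]:.2f} mg/L, C(24 h) = {C[-1]:.3f} mg/L")
```

For any other dose or dosing schedule the same expression (or its superposition over doses) yields the concentration profile, which is the sense in which simulation only requires known structure and parameter values.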

A more common use of informatics for data analysis is the development of (quantitative) structure-property relationships (QSPR) for the prediction of materials properties and thus ultimately the design of polymers. Quantitative structure-property relationships are multivariate statistical correlations between the property of a polymer and a number of variables, which are either physical properties themselves or descriptors, which hold information about a polymer in a more abstract way. The simplest QSPR models are usually linear regression-type models but complex neural networks and numerous other machine-learning techniques have also been used. [Pg.133]
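The simplest linear-regression QSPR can be sketched in a few lines; the descriptor matrix and property values below are entirely hypothetical:

```python
import numpy as np

# Hypothetical descriptors (e.g. molar mass, ring count, polarity index)
# and a polymer property (e.g. a glass transition temperature, K).
X = np.array([[100.0, 0, 0.2],
              [150.0, 1, 0.5],
              [200.0, 1, 0.3],
              [250.0, 2, 0.8],
              [300.0, 2, 0.6]])
y = np.array([350.0, 380.0, 400.0, 430.0, 450.0])

# Simplest QSPR: multivariate linear regression (intercept via a ones column)
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print("coefficients:", np.round(coef, 3))
print("predictions:", np.round(pred, 1))
```

Neural networks and other machine-learning models mentioned in the text replace this linear map with a nonlinear one, but the descriptor-to-property structure is the same.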

Clearly, environmental chamber studies are very useful tools in examining the chemical relationships between emissions and air quality and for carrying out related (e.g., exposure) studies. Use of these chambers has permitted the systematic variation of individual parameters under controlled conditions, unlike ambient air studies, where the continuous injection of pollutants and the effects of meteorology are often difficult to assess and to quantitatively incorporate into the data analysis. Chamber studies have also provided the basis for the validation of computer kinetic models. Finally, they have provided important kinetic and mechanistic information on some of the individual reactions occurring during photochemical smog formation. [Pg.880]


The use of statistical tests to analyze and quantify the significance of sample data is widespread in the study of biological systems where precise physical models are not readily available. Statistical tests are used in conjunction with measured data as an aid to understanding the significance of a result. In data analysis they answer the question of whether the inferences drawn from the data set are probable and statistically relevant. The statistical tests go further than a mere qualitative description of relevance. They are designed to provide a quantitative number for the probability that the stated hypothesis about the data is either true or false. In addition, they allow for the assessment of whether there are enough data to make a reasonable assumption about the system. [Pg.151]
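For example, a two-sample t statistic turns "do these two groups differ?" into a quantitative probability. The data here are simulated, and for the large samples used the two-sided p-value can be approximated with the normal distribution:

```python
import math
import numpy as np

rng = np.random.default_rng(3)
control = rng.normal(10.0, 1.0, size=100)    # hypothetical assay values
treated = rng.normal(11.0, 1.0, size=100)

# Two-sample t statistic with pooled variance; for large samples the null
# distribution is close to standard normal, so a two-sided p-value can be
# approximated with the complementary error function.
n1, n2 = len(control), len(treated)
sp2 = ((n1 - 1) * control.var(ddof=1) + (n2 - 1) * treated.var(ddof=1)) / (n1 + n2 - 2)
t_stat = (treated.mean() - control.mean()) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
p_value = math.erfc(abs(t_stat) / math.sqrt(2))   # two-sided, normal approximation
print(f"t = {t_stat:.2f}, approx p = {p_value:.2e}")
```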

The rather complex issue of chemical kinetics has been discussed in a quantitative way, in order to stress two main ideas, namely, the necessity of resorting to simplified kinetic models and the need for adequate methods of data analysis to estimate the kinetic parameters. These results introduce Chap. 3, in which basic concepts and up-to-date methods of identification of kinetic parameters are presented. [Pg.37]

Traditionally, it was necessary to maintain constant pH and ionic strength in order to quantitatively model and analyze such reactions. Methods for the analysis of the above nonideal data sets have been published [34, 35]. [Pg.255]

Model-based nonlinear least-squares fitting is not the only method for the analysis of multiwavelength kinetics. Such data sets can be analyzed by so-called model-free or soft-modeling methods. These methods do not rely on a chemical model, but only on simple physical restrictions, such as non-negativity of concentrations and molar absorptivities. Soft-modeling methods are discussed in detail in Chapter 11 of this book. They can be a powerful alternative to the hard-modeling methods described in this chapter. In particular, this is the case where there is no functional relationship that can describe the data quantitatively. These methods can also be invaluable aids in the development of the correct kinetic model that should be used to analyze the data by hard-modeling techniques. [Pg.257]
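A minimal sketch of one such soft-modeling approach is alternating least squares with non-negativity constraints. The two-component kinetic data are simulated, and a production implementation (e.g. MCR-ALS) would add closure constraints and convergence criteria:

```python
import numpy as np

# Simulated two-component kinetic data matrix D = C_true @ S_true
# (concentration profiles x spectra); all values are invented.
t = np.linspace(0, 10, 40)
C_true = np.column_stack([np.exp(-0.5 * t), 1 - np.exp(-0.5 * t)])
S_true = np.array([[1.0, 0.2, 0.0],
                   [0.1, 0.8, 1.0]])
D = C_true @ S_true

# Model-free alternating least squares: no rate law is assumed, only
# non-negativity of concentrations (C) and molar absorptivities (S).
C = D[:, :2].copy()                    # crude non-negative initial guess
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0.0, None)
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)

rel_residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print(f"relative residual: {rel_residual:.4f}")
```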

Thousands of chemical compounds have been identified in oils and fats, although only a few hundred are used in authentication. This means that each object (food sample) may have a unique position in an abstract n-dimensional hyperspace. This is a concept that analysts find difficult to interpret, since a data matrix exceeding three features already poses a problem. The art of extracting chemically relevant information from data produced in chemical experiments by means of statistical and mathematical tools is called chemometrics. It is an indirect approach to the study of the effects of multivariate factors (or variables) and hidden patterns in complex sets of data. Chemometrics is routinely used for (a) exploring patterns of association in data, and (b) preparing and using multivariate classification models. The arrival of chemometric techniques has allowed the quantitative as well as qualitative analysis of multivariate data and, in consequence, it has allowed the analysis and modelling of many different types of experiments. [Pg.156]
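A common first chemometric step for exploring patterns of association is principal component analysis, which projects the n-dimensional hyperspace onto a few interpretable components. A sketch on invented two-group data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical oil samples: 20 objects x 6 chemical features, two groups
# differing in the first two features (illustrative data only).
group1 = rng.normal(0.0, 0.3, size=(10, 6)) + np.array([2, 2, 0, 0, 0, 0])
group2 = rng.normal(0.0, 0.3, size=(10, 6))
X = np.vstack([group1, group2])

# PCA via SVD of the mean-centered matrix: the 6-D feature space is
# projected onto two components that expose the group structure.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                 # coordinates on the first two PCs
explained = s[:2] ** 2 / (s ** 2).sum()
print("variance explained by PC1, PC2:", np.round(explained, 3))
print("group means on PC1:", scores[:10, 0].mean(), scores[10:, 0].mean())
```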

