Big Chemical Encyclopedia


Data model consequences

Such conformational dependence presents both challenges and an opportunity. The challenges lie in properly accounting for its consequences. In many cases the exact conformational energetics and populations in a sample may be unknown, and the nature of the sample inlet may sometimes also mean that a Boltzmann distribution cannot be assumed. Introducing this uncertainty into the data modeling process produces a corresponding uncertainty in the theoretical interpretation of the data... [Pg.319]

Danek and his group have independently proposed a quite similar model, which they call the dissociation model. For this model, Olteanu and Pavel have presented a versatile numerical method and its computing program. However, they calculated only the electrical conductivity or the molar conductivity of the mixtures, so the deviation of the internal mobilities of the constituent cations from the experimental data remains unclear. [Pg.149]

The use of transition state theory as a convenient expression of rate data is obviously complex owing to the presence of the temperature-dependent partition functions. Most researchers working in the area of chemical kinetic modeling have found it necessary to adopt a uniform means of expressing the temperature variation of rate data and consequently have adopted a modified Arrhenius form... [Pg.50]
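The modified Arrhenius form mentioned above is conventionally written k(T) = A·Tⁿ·exp(−Ea/RT). A minimal sketch, with purely illustrative parameter values (A, n, and Ea below are not data for any specific reaction):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def modified_arrhenius(T, A, n, Ea):
    """Rate coefficient k(T) = A * T**n * exp(-Ea / (R*T)).

    A, n and Ea are illustrative fitting parameters; the T**n factor
    absorbs the temperature dependence of the partition functions.
    """
    return A * T**n * math.exp(-Ea / (R * T))

# Illustrative parameters only: A in consistent units, Ea in J/mol.
k_500 = modified_arrhenius(500.0, A=1.0e6, n=0.5, Ea=40_000.0)
k_1000 = modified_arrhenius(1000.0, A=1.0e6, n=0.5, Ea=40_000.0)
# The rate coefficient rises steeply with temperature.
```

With n = 0 the expression reduces to the classical Arrhenius law, which is why this three-parameter form is a convenient uniform fit across wide temperature ranges.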

The authors Stahley and Strobel believe that this two-metal-ion intermediate describes the catalytic site bonding characteristics better than the three-metal-ion intermediate described by Herschlag and Piccirilli. Although Stahley and Strobel's results for PDB 1ZZN do not rule out a disordered third metal ion in their crystal structure, they believe that the great majority of the biochemical data are explained by their two-metal model. Consequently, they would equate their M1 with reference 27's MA and their M2 with... [Pg.258]

Although every subvalidation experiment draws from the same set of calibration data, each individual subvalidation involves the application of a model to a set of data not used to build the model. Consequently, cross validation tends to generate more realistic results than the model fit (RMSEE, Equation 12.11). [Pg.410]
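The idea can be sketched with leave-one-out cross validation of a simple straight-line calibration (the data below are made up; the point is only the comparison between the fit error and the cross-validated error):

```python
import math

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def rmsee(xs, ys):
    # Error of estimation: the model is fit to, then scored on, all data.
    a, b = fit_line(xs, ys)
    return math.sqrt(sum((y - (a + b * x)) ** 2
                         for x, y in zip(xs, ys)) / len(xs))

def rmsecv(xs, ys):
    # Cross validation: each point is predicted by a model built without it.
    errs = []
    for i in range(len(xs)):
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append((ys[i] - (a + b * xs[i])) ** 2)
    return math.sqrt(sum(errs) / len(errs))

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.1, 1.9, 3.2, 3.8, 5.1, 6.2]
# RMSECV is more pessimistic, and more realistic, than RMSEE.
```

For least-squares models the cross-validated error is never smaller than the fit error, which is exactly why it gives a more honest estimate of predictive performance.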

Recently, on-line FBRM, ATR-FTIR spectroscopy, Raman spectroscopy and PLS were used to monitor a complex crystallization system: a racemic free base of a given compound and a chiral acid. The authors first demonstrate that the diastereomeric composition can be estimated from Raman spectral data, slurry density and temperature using a PLS model. Consequently, the issue of on-line slurry density prediction, which is not readily available, arises. An additional PLS model was constructed that used the ATR-FTIR spectral data to infer slurry density. Slurry density as predicted in real time via ATR-FTIR spectroscopy was fed into the aforementioned Raman, slurry density and temperature PLS model to yield a more accurate estimate of the fractional solid composition of the two diastereomers. [Pg.443]
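The two-stage inference can be sketched as a pair of chained calibration models. The linear functions and every coefficient below are hypothetical stand-ins for the actual PLS models, which are multivariate; only the chaining structure is the point:

```python
def predict_density(ir_signal, coef=0.8, intercept=0.1):
    # Hypothetical calibration standing in for the ATR-FTIR PLS model:
    # infer slurry density from an IR-derived signal.
    return intercept + coef * ir_signal

def predict_composition(raman, density, temp_c,
                        w=(0.5, 0.3, -0.002), b=0.05):
    # Hypothetical calibration standing in for the Raman/density/T
    # PLS model: estimate the fractional solid composition.
    return b + w[0] * raman + w[1] * density + w[2] * temp_c

# Stage 1: soft sensor for slurry density (not directly measurable on-line).
density = predict_density(ir_signal=0.75)
# Stage 2: feed the inferred density into the composition model.
frac = predict_composition(raman=0.6, density=density, temp_c=25.0)
```

The design choice worth noting is that an unmeasurable input (slurry density) is replaced by a model prediction, so errors in the first model propagate into the second.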

All trap-spectroscopic techniques that are based on thermal transport properties have in common that the interpretation of empirical data is often ambiguous because it requires knowledge of the underlying reaction kinetic model. Consequently, a large number of published trapping parameters—with the possible exception of thermal ionization energies in semiconductors—are uncertain. Data obtained with TSC and TSL techniques, particularly when applied to photoconductors and insulators, are no exceptions. [Pg.9]

Electronic models (first proposed by Jaffe 1977) deal with primary accelerated electrons and rely on the existence of an efficient and omnipresent re-acceleration mechanism. These models also invoke low values of the volume-averaged magnetic field (⟨B⟩ ≈ 0.2, Fusco-Femiano et al. 2004), require a rather high CR injection to fit the available HXR data, and consequently predict a substantial amount of gamma-ray emission that is spatially concentrated,... [Pg.91]

Table 5.2 compares the observed and calculated vibrational frequencies for each Al-organic acid complex wherever the experimental data are available. An example correlation is plotted in Figure 5.5. The excellent correlation between theory and experiment substantiates the accuracy of our methodology. When 27Al NMR spectra are available, the same complex that fits the vibrational frequencies also fits the observed 27Al chemical shift.67 The fact that the same complexes that reproduce vibrational frequencies also reproduce the δ27Al values is a strong indicator that the complexes are realistically modeled. Consequently, we can use these ab initio results... [Pg.135]
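Theory-versus-experiment agreement of this kind is typically quantified with a correlation coefficient. A minimal sketch, using hypothetical frequencies (not the actual Al-organic acid data):

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical observed vs. ab initio frequencies (cm^-1),
# illustrating a near-perfect theory/experiment correlation.
observed = [450.0, 612.0, 890.0, 1025.0, 1430.0]
calculated = [455.0, 605.0, 901.0, 1018.0, 1442.0]
r = pearson_r(observed, calculated)
```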

Consequently, a product data model defines the form and the content of product data [839]. It classifies, structures, and organizes the product data and specifies their mutual relationships. [Pg.85]

Model identification is an iterative process. There are several software packages with modules that automate time series model development. When a model is developed to describe data that have stochastic variations, one has to be cautious about the degree of fit. By increasing model complexity (adding extra terms), a better fit can be obtained. But the model may then describe part of the stochastic variation in that particular data set, which will not occur identically in other data sets. Consequently, although the fit to the training data may be improved, the prediction errors may get worse. [Pg.85]
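This trade-off can be demonstrated with a toy example: noisy samples of a straight line, fitted once with a simple least-squares line and once with an interpolating polynomial that has enough terms to fit every point exactly. All data below are made up:

```python
def ols_line(xs, ys):
    # Least-squares straight line: a simple, smooth model.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return lambda x: a + b * x

def lagrange(xs, ys):
    # Interpolating polynomial: passes exactly through every training
    # point, including the noise.
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

def mse(model, xs, ys):
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Noisy samples of the underlying line y = x.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 0.9, 2.2, 2.8, 4.1]
# Fresh data from the same process: the noise does not repeat.
xs_new = [0.5, 1.5, 2.5, 3.5]
ys_new = [0.5, 1.5, 2.5, 3.5]

line, interp = ols_line(xs, ys), lagrange(xs, ys)
# The interpolant fits the training data perfectly yet predicts worse.
```

The interpolant's training error is essentially zero while its prediction error exceeds that of the plain line, which is the overfitting behavior described above.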

Data models and schema at the data sources continue to change, and consequently the data wrappers for the federation engine need to be modified. [Pg.358]

XQuery (XML Query) Flexible query facilities used to extract data from real and virtual documents by way of XML notation in a file system or on the World Wide Web. XQuery consequently provides interaction and data exchange between the web world and the database world and ultimately enables collections of XML files to be accessed like databases. The XML Query project of the W3C Consortium includes not only the standard for querying XML documents but also the next-generation standards for doing XML selection (XPath2), for XML serialization, for full-text search, for a possible functional XML data model, and for a standard set of functions and operators for manipulating web data. [Pg.526]
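The flavor of such XPath-style selection can be sketched with Python's standard library, which supports a limited XPath subset; `ElementTree` stands in here for a real XQuery engine, and the document is invented:

```python
import xml.etree.ElementTree as ET

# A toy XML document queried like a small database.
doc = ET.fromstring(
    "<library>"
    "<book year='1999'><title>XML Basics</title></book>"
    "<book year='2007'><title>Querying XML</title></book>"
    "</library>"
)

# Select the titles of all books published after 2000.
titles = [b.find("title").text
          for b in doc.findall("book")
          if int(b.get("year")) > 2000]
```

In XQuery proper the same selection would be a one-line FLWOR or path expression; the point is that XML collections become queryable like database tables.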

By contrast, a research model is designed to test a hypothesis and will have a clear mechanistic base and a comprehensive data requirement. Validation becomes an integral part of the process in suggesting further refinements in the model and/or the hypothesis. The model can also suggest possible research directions. A research model will be more restricted in scope than an evaluative model. Consequently, extrapolating from a restricted research model, or expecting evaluative models to provide precise quantitative assessments, would be examples of misuse. [Pg.370]

There are a number of consequences of using the NCBI data model for building databases and generating reports. Some of these are discussed in the remainder of this section. [Pg.41]

In recent years, non-compartmental or model-independent approaches to pharmacokinetic data analysis have been increasingly utilized since this approach permits the analysis of data without the use of a specific compartment model. Consequently, sophisticated, and often complex, computational methods are not required. The statistical or non-compartmental concept was first reported by Yamaoka in a general manner and by Cutler with specific application to mean absorption time. Riegelman and Collier reviewed and clarified these concepts and applied statistical moment theory to the evaluation of in vivo absorption time. This concept has many additional significant applications in pharmacokinetic calculations. [Pg.361]
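The statistical-moment quantities at the heart of non-compartmental analysis (AUC, AUMC and mean residence time) can be sketched with the linear trapezoidal rule. The concentration-time data below are hypothetical:

```python
def trapz(ts, cs):
    # Linear trapezoidal rule over unevenly spaced time points.
    return sum((cs[i] + cs[i + 1]) / 2 * (ts[i + 1] - ts[i])
               for i in range(len(ts) - 1))

def nca_moments(times, conc):
    """Zeroth and first statistical moments of a concentration-time curve.

    AUC  = area under C(t); AUMC = area under t*C(t);
    MRT  = AUMC / AUC, the mean residence time.
    Both areas are truncated at the last sampling time (no tail
    extrapolation in this sketch).
    """
    auc = trapz(times, conc)
    aumc = trapz(times, [t * c for t, c in zip(times, conc)])
    return auc, aumc, aumc / auc

# Hypothetical plasma concentrations (mg/L) at sampling times (h).
times = [0.0, 1.0, 2.0, 4.0, 8.0]
conc = [0.0, 4.0, 3.0, 1.5, 0.3]
auc, aumc, mrt = nca_moments(times, conc)
```

No compartment structure is assumed anywhere, which is exactly why the computation stays this simple.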

Some comparisons of a hierarchical data model with a relational data model are of interest here. The structures in the hierarchical model represent the information that is contained in the fields of the relational model. In a hierarchical model, certain records must exist before other records can exist. The hierarchical model is generally required to have only one key field. In a hierarchical data model, it is necessary to repeat some data in a descendant record that need be stored only once in a relational database regardless of the number of relations. This is so because it is not possible for one record to be a descendant of more than one parent record. There are some unfortunate consequences of the mathematics involved in creating a hierarchical tree, as contrasted with relations among records. Descendants cannot be added without a root leading to them, for example. This leads to a number of undesirable characteristic properties of hierarchical models that may affect our ability to easily add, delete, and update or edit records. [Pg.121]
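The duplication problem can be made concrete with nested dictionaries standing in for the two models; the record contents ("Acme Corp" and friends) are invented:

```python
# Hierarchical form: a child record lives under exactly one parent, so
# shared data must be repeated in every subtree that uses it.
hierarchical = {
    "project_a": {"supplier": {"name": "Acme Corp", "city": "Leeds"}},
    "project_b": {"supplier": {"name": "Acme Corp", "city": "Leeds"}},
}

# Relational form: the supplier is stored once and referenced by key
# from any number of project rows.
suppliers = {"S1": {"name": "Acme Corp", "city": "Leeds"}}
projects = {"project_a": {"supplier_id": "S1"},
            "project_b": {"supplier_id": "S1"}}

# An update touches one relational row...
suppliers["S1"]["city"] = "York"
# ...but every duplicated hierarchical copy must be edited separately.
for proj in hierarchical.values():
    proj["supplier"]["city"] = "York"
```

Missing one of the hierarchical copies during an update is precisely the kind of inconsistency the relational key-based design rules out.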

The symbol / indicates either that a resident has left the nursing home or that the record is the last resident assessment within the 30 days preceding the data extraction. It is used only when we want to derive the Markov model. Consequently, in the following example, the symbol / does not appear in Fig. 6. [Pg.98]
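Deriving a Markov model from such sequences amounts to counting transitions, with / acting as an absorbing end symbol. A minimal sketch with invented assessment sequences (the states "A" and "B" and the data are hypothetical):

```python
from collections import Counter, defaultdict

# Hypothetical assessment sequences; "/" marks that the resident left
# or that this was the final assessment before extraction.
sequences = [["A", "B", "B", "/"],
             ["A", "A", "B", "/"],
             ["B", "A", "/"]]

# Count observed transitions between consecutive symbols.
counts = defaultdict(Counter)
for seq in sequences:
    for cur, nxt in zip(seq, seq[1:]):
        counts[cur][nxt] += 1

# Maximum-likelihood transition probabilities P(next | current).
P = {s: {t: n / sum(c.values()) for t, n in c.items()}
     for s, c in counts.items()}
```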

The relational data model has several distinctive features as a consequence of its simple mathematical characterization ... [Pg.111]

As a consequence of the central role that relations retain in object-relational data models, one crucial difference with respect to the object-oriented case is that the role played by object identity is relaxed to an optional, rather than mandatory, feature. Thus, an object-relational DBMS stands on an evolutionary path from relational ones, whereas object-oriented DBMSs represent a complete break with the relational approach. In this context, notice that while a tuple type constructor may allow a relation type to be supported, each tuple will have an identity, and attribute names will be explicitly needed to retrieve and interact with values. [Pg.114]

Such differences at the data model level lead to pragmatic consequences of some significance at the level of the languages used to interact with application entities. In particular, while object-oriented data models naturally induce a navigational approach to accessing values, this leads to chains of reference of indefinite length that need to be traversed, or navigated. [Pg.114]
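The navigational style can be sketched as following a chain of object references until it ends; the class and field names below are illustrative:

```python
class Obj:
    """A value holding a reference to the next object in the chain."""
    def __init__(self, value, next_ref=None):
        self.value = value
        self.next_ref = next_ref

# A reference chain of indefinite length: a -> b -> payload.
tail = Obj("payload")
head = Obj("a", Obj("b", tail))

# Navigational access: traverse hop by hop until the chain ends.
node = head
while node.next_ref is not None:
    node = node.next_ref
result = node.value
```

A relational query would instead declare the value wanted and leave the traversal to the engine; here the application code itself must walk every hop.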

Persistence layer adaptation of the ISA-95 object model. In order to implement the Estimate production dates WS it was necessary to write a simple scheduler, and the scheduler itself would need a description of the current resources and processes available in the enterprise. Consequently, an appropriate data model had to be developed. [Pg.153]

A library of AltaRica nodes was developed in order to help build safety models for avionics platform architectures. This library includes AltaRica nodes that describe functional, logical and physical nodes. The library was used to develop safety models to assess the safety of Integrated Modular Avionics [9]. In this type of architecture, computation or communication resources are shared by several functions or data flows. Consequently, the fault of a shared physical node has an impact on all logical and functional nodes that are connected to it. For instance, in the previous figure, data flows DataFlow1a and DataFlow1b are connected to the same physical node, called Phy Item1. So if Phy Item1 is lost, both data flows would be lost. [Pg.272]
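The propagation rule can be sketched as a dependency map from data flows to the physical resources they use; the names echo the example above but the mapping itself is illustrative, not the actual AltaRica model:

```python
# Each data flow lists the physical nodes it depends on.
depends_on = {
    "DataFlow1a": {"Phy_Item1"},
    "DataFlow1b": {"Phy_Item1"},
    "DataFlow2": {"Phy_Item2"},
}

def lost_flows(failed_physical, deps):
    # A data flow is lost if any physical resource it uses has failed.
    return {flow for flow, phys in deps.items() if phys & failed_physical}

# Losing the shared physical node takes down both flows that use it.
lost = lost_flows({"Phy_Item1"}, depends_on)
```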

