
Data model using

Data model or data change - This indicates a change to either the type of data model used and/or the data value used in the model. The rationale for this is described in the appropriate field of the modelling tool ... [Pg.236]

Generally, by the very nature of their inductive development process, systems developed using AI techniques tend to show excellent performance against the particular data model used during the problem definition activity. Moreover, this performance can often be achieved for the expenditure of very low levels of effort when compared with conventional software systems [4]. For example, some estimates have placed the cost of development of AI-type systems at perhaps one tenth to one hundredth of that associated with conventional systems built to achieve the same purpose. Moreover, the maintenance effort associated with the deployment of AI technology can also be very low, indicating a considerable level of user satisfaction with such systems once deployed. Where there are significant safety risks associated... [Pg.237]

Unfortunately, there are many heterogeneous data models used in these information sources, for example, geometric and kinematic models, wiring plans, behavior specifications, and software programs in various representations. The variety of data sources is a major challenge that may prevent sufficiently effective and efficient data exchange between engineering applications and their users. [Pg.12]

Queries across different local data models using the syntax of common concepts should support collaboration and management at a project level by relying on capabilities for analysis of process data and facilitating project monitoring and control (Moser et al. 2011b). [Pg.183]

A. Iske (2002). Scattered data modelling using radial basis functions. In: A. Iske, E. Quak, and M.S. Floater (eds.), Tutorials on Multiresolution in Geometric Modelling. Springer, Heidelberg, pp. 205-242. [Pg.208]

The primary purpose for expressing experimental data through model equations is to obtain a representation that can be used confidently for systematic interpolations and extrapolations, especially to multicomponent systems. The confidence placed in the calculations depends on the confidence placed in the data and in the model. Therefore, the method of parameter estimation should also provide measures of reliability for the calculated results. This reliability depends on the uncertainties in the parameters, which, with the statistical method of data reduction used here, are estimated from the parameter variance-covariance matrix. This matrix is obtained as a last step in the iterative calculation of the parameters. [Pg.102]
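As a sketch of that last step (Python/NumPy assumed; the function below is illustrative, not from the source), the parameter variance-covariance matrix can be formed from the Jacobian and residuals of the converged least-squares fit:

    import numpy as np

    def parameter_covariance(J, r):
        # J: (n, p) Jacobian of model values w.r.t. parameters at convergence
        # r: (n,) residuals (data minus model) at convergence
        # cov = s2 * inv(J^T J), with s2 the residual variance estimate
        n, p = J.shape
        s2 = (r @ r) / (n - p)
        cov = s2 * np.linalg.inv(J.T @ J)
        return cov, np.sqrt(np.diag(cov))   # covariance and 1-sigma errors

The square roots of the diagonal elements give the parameter standard errors, which are the measures of reliability referred to above.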

From these data, a first approach is to develop linear models using a relation of the following type ... [Pg.205]
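The relation itself is elided above; purely as a generic illustration (synthetic data, NumPy assumed), a linear model of the type y ~ a0 + a1*x1 + a2*x2 can be fitted by ordinary least squares:

    import numpy as np

    rng = np.random.default_rng(1)
    x1, x2 = rng.random(50), rng.random(50)            # hypothetical predictors
    y = 2.0 + 0.5 * x1 - 1.2 * x2 + 0.05 * rng.standard_normal(50)

    X = np.column_stack([np.ones_like(x1), x1, x2])    # design matrix with intercept
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # recovers [a0, a1, a2]
    print(coef)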

Analytical models using classical reservoir engineering techniques such as material balance, aquifer modelling and displacement calculations can be used in combination with field and laboratory data to estimate recovery factors for specific situations. These methods are most applicable when data, time and resources are limited, and would be sufficient for most exploration and early appraisal decisions. However, when the development planning stage is reached, it is becoming common practice to build a reservoir simulation model, which allows more sensitivities to be considered in a shorter time frame. The typical sorts of questions addressed by reservoir simulations are listed in Section 8.5. [Pg.207]
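Material balance is the simplest of the techniques named above; as an illustrative sketch (the function and numbers are not from the source), the recovery factor of a depletion-drive gas reservoir follows directly from the p/z relation:

    def gas_recovery_factor(p_i, z_i, p_ab, z_ab):
        # Depletion-drive gas material balance: Gp/G = 1 - (p_ab/z_ab)/(p_i/z_i)
        # p_i, z_i: initial pressure and gas deviation factor
        # p_ab, z_ab: the same quantities at abandonment conditions
        return 1.0 - (p_ab / z_ab) / (p_i / z_i)

    # Illustrative numbers: initial 300 bar (z = 0.95), abandonment at 50 bar (z = 0.92)
    print(gas_recovery_factor(300.0, 0.95, 50.0, 0.92))   # about 0.83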

Several research groups have built models using theoretical descriptors calculated only from the molecular structure. This approach has proven particularly successful for the prediction of solubility without the need for descriptors derived from experimental data. Thus, it is also suitable for virtual screening and library design. The descriptors include 2D (two-dimensional, or topological) descriptors and 3D (three-dimensional, or geometric) descriptors, as well as electronic descriptors. [Pg.497]
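As a sketch of what structure-only descriptors look like in practice (the source names no toolkit; RDKit is assumed here, and the molecule is an arbitrary example), 2D and related descriptors can be computed directly from a SMILES string:

    from rdkit import Chem                     # assumed toolkit, not named in the source
    from rdkit.Chem import Descriptors

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, as an example

    features = {
        "MolWt": Descriptors.MolWt(mol),                       # constitutional
        "TPSA": Descriptors.TPSA(mol),                         # polar surface area
        "MolLogP": Descriptors.MolLogP(mol),                   # estimated lipophilicity
        "RotatableBonds": Descriptors.NumRotatableBonds(mol),  # 2D/topological
    }
    print(features)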

Recently, several QSPR solubility prediction models based on a fairly large and diverse data set were generated. Huuskonen developed the models using MLRA and back-propagation neural networks on a data set of 1297 diverse compounds [22]. The compounds were described by 24 atom-type E-state indices and six other topological indices. For the 413 compounds in the test set, MLRA gave r² = 0.88 and s = 0.71 and the neural network provided... [Pg.497]
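For context, r² is the squared correlation coefficient and s the standard error on the test set; a minimal sketch (synthetic data; scikit-learn assumed, which the source does not mention) of fitting and scoring an MLRA model:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 30))             # e.g. 30 topological descriptors
    y = X @ rng.standard_normal(30) + 0.5 * rng.standard_normal(500)  # synthetic log S

    X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]
    mlra = LinearRegression().fit(X_tr, y_tr)
    pred = mlra.predict(X_te)
    print(f"r2 = {r2_score(y_te, pred):.2f}, "
          f"s = {np.sqrt(np.mean((y_te - pred) ** 2)):.2f}")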

Each of these tools has advantages and limitations. Ab initio methods involve intensive computation and therefore tend to be limited, for practical reasons of computer time, to smaller atoms, molecules, radicals, and ions. Their CPU time needs usually vary with basis set size (M) as at least M⁴; correlated methods require time proportional to at least M⁵ because they involve transformation of the atomic-orbital-based two-electron integrals to the molecular orbital basis. As computers continue to advance in power and memory size, and as theoretical methods and algorithms continue to improve, ab initio techniques will be applied to larger and more complex species. When dealing with systems in which qualitatively new electronic environments and/or new bonding types arise, or excited electronic states that are unusual, ab initio methods are essential. Semi-empirical or empirical methods would be of little use on systems whose electronic properties have not been included in the data base used to construct the parameters of such models. [Pg.519]
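To make the scaling concrete: doubling M multiplies the cost of an M⁴ step by 2⁴ = 16 and of an M⁵ integral transformation by 2⁵ = 32, as this short illustration shows:

    # Relative cost growth when the basis set size M is doubled
    for power in (4, 5):
        print(f"M^{power} step: doubling M multiplies cost by {2 ** power}x")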

W. B. DeMore and co-workers, Chemical Kinetics and Photochemical Data for Use in Stratospheric Ozone Modeling, Evaluation No. 8, JPL Publ. 87-41, Jet Propulsion Laboratory, Pasadena, Calif., 1987. [Pg.388]

Implementation Issues A critical factor in the successful application of any model-based technique is the availability of a suitable dynamic model. In typical MPC applications, an empirical model is identified from data acquired during extensive plant tests. The experiments generally consist of a series of bump tests in the manipulated variables. Typically, the manipulated variables are adjusted one at a time and the plant tests require a period of one to three weeks. The step or impulse response coefficients are then calculated using linear-regression techniques such as least-squares methods. However, details concerning the procedures utilized in the plant tests and subsequent model identification are considered to be proprietary information. The scaling and conditioning of plant data for use in model identification and control calculations can be key factors in the success of the application. [Pg.741]
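As a sketch of that regression step (illustrative names and a simulated first-order plant, not from the source; real identification adds scaling, detrending, and validation), impulse-response coefficients can be estimated from bump-test data by least squares, with step-response coefficients obtained by cumulative summation:

    import numpy as np

    def identify_fir(u, y, n_coef):
        # Least-squares fit of h such that y(k) ~ sum_i h[i] * u(k - 1 - i)
        rows = [u[k - n_coef:k][::-1] for k in range(n_coef, len(y))]
        h, *_ = np.linalg.lstsq(np.array(rows), y[n_coef:], rcond=None)
        return h

    # Simulated series of bump tests on a first-order plant
    rng = np.random.default_rng(0)
    u = np.repeat(rng.choice([0.0, 1.0], size=20), 10)   # 200-sample bump sequence
    y = np.zeros(200)
    for k in range(199):
        y[k + 1] = 0.9 * y[k] + 0.1 * u[k]               # "true" plant response

    h = identify_fir(u, y, n_coef=30)                    # impulse-response model
    step_coeffs = np.cumsum(h)                           # step-response model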

Unknown Statistical Distributions Sixth, despite these problems, it is necessary that these data be used to control the plant and develop models to improve the operation. Sophisticated numerical and statistical methods have been developed to account for random... [Pg.2550]

Verneuil et al. (Verneuil, V.S., P. Yan, and F. Madron, "Banish Bad Plant Data," Chemical Engineering Progress, October 1992, pp. 45-51) emphasize the importance of proper model development. Systematic errors result not only from the measurements but also from the model used to analyze the measurements. Advanced methods of measurement processing will not substitute for accurate measurements. If highly nonlinear models (e.g., Cropley's kinetic model or typical distillation models) are used to analyze unit measurements and estimate parameters, the likelihood of arriving at erroneous models increases. Consequently, resultant models should be treated as approximations. [Pg.2564]

The accuracy of absolute risk results depends on (1) whether all the significant contributors to risk have been analyzed, (2) the realism of the mathematical models used to predict failure characteristics and accident phenomena, and (3) the statistical uncertainty associated with the various input data. The achievable accuracy of absolute risk results is very dependent on the type of hazard being analyzed. In studies where the dominant risk contributors can be calibrated with ample historical data (e.g., the risk of an engine failure causing an airplane crash), the uncertainty can be reduced to a few percent. However, many authors of published studies and other expert practitioners have recognized that uncertainties can be greater than 1 to 2 orders of magnitude in studies whose major contributors are rare, catastrophic events. [Pg.47]

Concerning the VDW parameters, the ability to directly apply previously optimized values makes convergence criteria unnecessary. If VDW parameter optimization is performed based on pure solvent or crystal simulations, then the heats of vaporization or sublimation should be within 2% of experimental values, as should the calculated molecular or unit cell volumes. If rare gas-model compound data are used, the references cited above should be consulted for a discussion of the convergence criteria. [Pg.33]
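A trivial sketch of that 2% criterion (illustrative code, not from the cited references):

    def within_two_percent(calculated, experimental, tol=0.02):
        # Convergence test: calculated heat of vaporization/sublimation
        # (or molecular/unit-cell volume) within 2% of experiment
        return abs(calculated - experimental) <= tol * abs(experimental)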

