Big Chemical Encyclopedia



Neural Network Potentials

Behler and Parrinello [33] presented a new scheme for generating interatomic potentials using neural networks that are trained to reproduce quantum mechanical data. The main assumption of the model is that the total energy of an atomic system can be described as a sum of atomic contributions [Pg.44]
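This central assumption can be written compactly. The following is a standard formulation of the Behler–Parrinello ansatz; the descriptor notation G_i is supplied here for illustration, anticipating the symmetry coordinates discussed later in the text:

```latex
E_{\text{total}} = \sum_{i=1}^{N} E_i(\mathbf{G}_i)
```

where each atomic energy $E_i$ is the output of a small feed-forward network evaluated on a vector $\mathbf{G}_i$ describing the local chemical environment of atom $i$.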


The Gaussian Approximation Potential scheme is similar to the Neural Network potentials introduced by Behler and Parrinello [33], as both use non-linear, non-parametric regression instead of fixed analytic forms. However, the representation of the atomic environments in GAP is complete and the Gaussian Process uses energies and forces for regression. Moreover, the training of the neural... [Pg.46]

Neural network potential-energy surfaces for atomistic simulations... [Pg.11]

Low-dimensional neural network potential-energy surfaces... [Pg.15]

To date, neural network potentials have been most frequently applied to represent low-dimensional, molecular PESs. Apart from the central role of molecules in chemistry, the main reason for this is certainly the comparably simple mapping of the reference points using electronic structure methods. Low-dimensional NN PESs have been constructed, for example, for the ground-state and excited-state PESs and transition dipole moments of the HCl+ ion, for the OH radical in order to calculate the vibrational levels, for H3+ to calculate rovibrational spectra, for the free H2O molecule, for the dissociation of a SiO2 molecule, for the HOOH molecule, for the NOCl molecule, for formaldehyde, for the cis-trans isomerization and dissociation dynamics of nitrous acid, for H + HBr, for the reaction OH + H2 → H2O + H, for the reaction BeH + H2 → BeH2 + H, for small silicon clusters, for vinyl bromide, and to describe the three-body interaction energies in the H2O-... [Pg.15]

Incorporation of the symmetry into neural network potentials... [Pg.16]

A very important aspect of setting up a neural network potential is the choice of the input coordinates. Cartesian coordinates cannot be used as input for NNs at all, because they are not invariant with respect to rotation and translation of the system. Since the NN output depends on the absolute numbers fed into the NN in the input nodes, simply translating or rotating a molecule would change its energy. Instead, some form of internal coordinates, like interatomic distances and bond angles, or functions of these coordinates should be used. To define a non-periodic structure containing N atoms uniquely, 3N − 6 coordinates are required. However, for NNs redundant information does not pose a problem, and sometimes the complete set of N(N − 1)/2 interatomic distances is used. [Pg.16]
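The invariance argument above can be illustrated with a minimal sketch (the helper function and geometry are hypothetical, not from the text): the full set of interatomic distances is unchanged when a structure is rotated or translated, so it is a valid, if redundant, NN input.

```python
import math

def pairwise_distances(coords):
    """Return the sorted N(N-1)/2 interatomic distances of a structure.

    Distances are invariant under rotation and translation, so this
    vector is a valid (if redundant) NN input, unlike raw Cartesians.
    """
    n = len(coords)
    d = []
    for i in range(n):
        for j in range(i + 1, n):
            d.append(math.dist(coords[i], coords[j]))
    return sorted(d)

# A bent triatomic (water-like geometry, arbitrary units)
mol = [(0.0, 0.0, 0.0), (0.96, 0.0, 0.0), (-0.24, 0.93, 0.0)]

# Translate the molecule and rotate it by 90 degrees about the z axis
moved = [(-y + 1.0, x + 2.0, z + 3.0) for (x, y, z) in mol]

ref = pairwise_distances(mol)
new = pairwise_distances(moved)
assert all(abs(a - b) < 1e-12 for a, b in zip(ref, new))
```

Feeding `pairwise_distances(mol)` into a network therefore yields the same prediction for every rigid placement of the molecule, which is exactly the property raw Cartesians lack.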

Systematic approaches to neural network potentials for molecular systems... [Pg.19]

High-dimensional neural network potentials based on a sum of bond energies... [Pg.26]

In the following, we summarize the results of our preliminary calculations for the GeTe alloy, a prototypical PCM. Here we should mention that important progress has been made recently in the construction of a novel, classical neural-network potential for GeTe [60], fitted against ab initio data, which shows an accuracy comparable to that of AIMD and is 4 orders of magnitude faster than the AIMD method by Kühne et al. [45]. However, since the development of neural-network potentials for 3- and 4-component PCMs remains a challenge, AIMD will likely be the only viable method for a systematic study of PCMs in the years to come. [Pg.78]

The recently developed neural-network potential for GeTe should allow one to reconstruct the FES of GeTe as a function of temperature, at an affordable computational cost. It should also enable a systematic optimization of the CVs and of the parameters of MTD, which could then be employed for ab initio MTD investigations of nucleation in chemically more complex PCMs. [Pg.82]

The possibilities for the application of neural networks in chemistry are huge [10]. They can be used for various tasks: for the classification of structures or reactions, for establishing spectra-structure correlations, for modeling and predicting biological activities, or to map the electrostatic potential on molecular surfaces. [Pg.464]

Artificial Neural Networks have the following potential advantages for intelligent control ... [Pg.348]

Partial Least Squares (PLS) regression (Section 35.7) is one of the more recent advances in QSAR which has led to the now widely accepted method of Comparative Molecular Field Analysis (CoMFA). This method makes use of local physicochemical properties such as charge, potential and steric fields that can be determined on a three-dimensional grid that is laid over the chemical structures. The determination of steric conformation, by means of X-ray crystallography or NMR spectroscopy, and the quantum mechanical calculation of charge and potential fields are now performed routinely on medium-sized molecules [10]. Modern optimization and prediction techniques such as neural networks (Chapter 44) also have found their way into QSAR. [Pg.385]

The role of an artificial neural network is to discover the relationships that link patterns of input data to associated output data. Suppose that a database contains information on the structure of many potential drug molecules (the input) and their effectiveness in treating some specific disease (the output). Since the clinical value of a drug must in some way be related to its molecular structure, correlations certainly exist between structure and effectiveness, but those relationships may be very subtle and deeply buried. [Pg.9]

Overfitting is a potentially serious problem in neural networks. It is tackled in two ways (1) by continually monitoring the quality of training as it occurs using a test set, and (2) by ensuring that the geometry of the network (its size and the way the nodes are connected) is appropriate for the size of the dataset. [Pg.38]
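The first of these safeguards, continually monitoring quality on a held-out set, is commonly realized as early stopping. A generic sketch follows; the loop and the toy loss function are illustrative stand-ins, not tied to any particular network library:

```python
def train_with_early_stopping(train_step, val_loss, max_epochs=500, patience=20):
    """Generic early-stopping loop: halt when the held-out loss has not
    improved for `patience` consecutive epochs (a guard against overfitting)."""
    best, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        train_step(epoch)                 # one pass over the training data
        loss = val_loss(epoch)            # quality on the monitoring set
        if loss < best - 1e-9:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            return epoch, best            # model has stopped generalizing
    return max_epochs - 1, best

# Toy monitoring loss: improves until epoch 50, then degrades
stopped_at, best_loss = train_with_early_stopping(
    lambda e: None, lambda e: abs(e - 50) / 50.0, patience=20
)
assert stopped_at == 70 and best_loss == 0.0
```

In practice the parameters giving the best monitored loss (here, those from epoch 50) would be the ones retained.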

Figure 11.6 Principal construction of a neural network. The input and output data are examples of potential data sources of interest.
Most AI methods used in science lie within one of three areas: evolutionary methods, neural networks and related methods, and knowledge-based systems. Additional methods, such as automated reasoning, hybrid systems, fuzzy logic, and case-based reasoning, are also of scientific interest, but this review will focus on the methods that seem to offer the greatest near-term potential in science. [Pg.350]

Huang and Tang [49] trained a neural network with data relating to several qualities of polymer yarn and ten process parameters. They then combined this ANN with a genetic algorithm to find parameter values that optimize quality. Because the relationships between processing conditions and polymer properties are poorly understood, this combination of AI techniques is a potentially productive way to proceed. [Pg.378]
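The ANN-plus-genetic-algorithm combination can be sketched as follows. The surrogate function below is a hypothetical stand-in for the trained network, and the GA is a deliberately minimal truncation-selection variant, not the specific algorithm used in the cited work:

```python
import random

def surrogate_quality(params):
    """Stand-in for a trained ANN mapping ten process parameters to
    predicted yarn quality (hypothetical: peak quality at 0.7 for each)."""
    return -sum((p - 0.7) ** 2 for p in params)

def genetic_optimize(fitness, n_params=10, pop=40, gens=120, sigma=0.05):
    """Tiny GA: truncation selection plus Gaussian mutation, searching for
    parameter settings that maximize the surrogate's predicted quality."""
    random.seed(0)
    population = [[random.random() for _ in range(n_params)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 4]          # keep the best quarter
        population = parents + [
            [min(1.0, max(0.0, g + random.gauss(0.0, sigma)))
             for g in random.choice(parents)]     # mutated offspring
            for _ in range(pop - len(parents))
        ]
    return max(population, key=fitness)

best = genetic_optimize(surrogate_quality)
```

The GA never needs the true, poorly understood process-property relationship: it only queries the cheap surrogate, which is what makes the combination attractive.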

Second, Reinhardt and Hubbard (1998) performed a prediction using neural networks. From some statistical considerations, they selected three locations for prokaryotes (cytoplasmic, extracellular, and periplasmic) and four locations for eukaryotes, excluding plants (cytoplasmic, extracellular, mitochondrial, and nuclear). They did not include the membrane proteins because they can be distinguished rather reliably using existing methods. One potential problem of their analysis is that they only excluded sequence pairs with more than 90% identity. Nevertheless, the distinctions between pairs of groups were rather clear. The high accuracy between nuclear and cytoplasmic proteins was especially impressive. [Pg.329]

General regression neural network methodology is potentially more useful for predicting the QSPR relationships as compared to multiple linear regressions. [Pg.553]
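A general regression neural network reduces to a kernel-weighted average of the training targets, which is what lets it capture the non-linear relationships that multiple linear regression misses. A minimal sketch on toy descriptor data (all names and values hypothetical):

```python
import math

def grnn_predict(x_train, y_train, x, sigma=0.5):
    """General Regression Neural Network (Nadaraya-Watson form):
    the prediction is a distance-weighted average of the training
    targets, with Gaussian kernel width sigma as the single
    smoothing parameter."""
    weights = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x)) / (2 * sigma ** 2))
        for xi in x_train
    ]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

# Toy QSPR-style data: one molecular descriptor -> property value
x_train = [(0.0,), (1.0,), (2.0,), (3.0,)]
y_train = [0.0, 1.0, 4.0, 9.0]

pred = grnn_predict(x_train, y_train, (1.0,), sigma=0.3)
assert abs(pred - 1.0) < 0.05
```

Unlike a back-propagation network, a GRNN needs no iterative training; only sigma must be tuned, typically by cross-validation.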

The applications of NN to solvent extraction, reported in Section 16.4.6.2, suffer from an essential limitation in that they do not apply to processes of a quantum nature; therefore, they are not able to describe metal complexes in extraction systems on the microscopic level. In fact, the networks can describe only the pure state of the simplest quantum systems, without superposition of states. Neural networks that indirectly take into account quantum effects have already been applied to chemical problems. For example, the combination of quantum mechanical molecular electrostatic potential surfaces with neural networks makes it possible to predict the bonding energy for bioactive molecules with enzyme targets. Computational NN were employed to identify the quantum mechanical features of the... [Pg.707]





© 2024 chempedia.info