
Neural network energy functions

Molecular mechanics is a simple technique for scanning the potential energy surface of a molecule, molecular ion, crystal lattice or solvate. The model is based on a set of functions which may or may not be grounded in chemical and physical principles; these functions are parameterized against experimental data. That is, the potential energy surface is not computed from fundamental theoretical expressions but from functions whose parameters are derived empirically by reproducing experimentally observed data. Molecular mechanics is therefore, like a neural network, completely dependent on what it has been taught: the quality of the results depends on the choice of the experimental data used for the parameterization. Clearly, the choice of potential energy functions is also of some importance. The most common model used is loosely derived from... [Pg.56]
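The kind of empirical functional form meant here can be illustrated with a toy example. Below is a minimal sketch, assuming simple harmonic bond-stretch and angle-bend terms with made-up parameters; a real force field contains many more terms (torsions, non-bonded interactions), all fitted to experimental data.

```python
import numpy as np

# Minimal sketch of typical molecular-mechanics energy terms. The parameters
# (k_b, r0, k_theta, theta0) are hypothetical placeholders; in practice they
# are fitted to reproduce experimental observables.
def bond_energy(r, k_b=300.0, r0=1.53):
    """Harmonic bond-stretch term, E = k_b * (r - r0)**2."""
    return k_b * (r - r0) ** 2

def angle_energy(theta, k_theta=50.0, theta0=np.deg2rad(109.5)):
    """Harmonic angle-bend term, E = k_theta * (theta - theta0)**2."""
    return k_theta * (theta - theta0) ** 2

# Total strain energy of a toy fragment: one stretched C-C bond, one bent angle.
E = bond_energy(1.58) + angle_energy(np.deg2rad(112.0))
print(f"strain energy: {E:.3f} (arbitrary units)")
```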

Although the neural network literature increasingly contains examples of radial basis function network applications, their use in genome informatics has rarely been reported. This is not because the potential for applications is absent, but more likely reflects the lag between the development of a technology and its application to a given field. Casadio et al. (1995) used a radial basis function network to optimally predict the free energy contributions due to hydrogen bonds, hydrophobic interactions, and the unfolded state from simple input measures. [Pg.46]
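For illustration, here is a minimal sketch of a generic Gaussian radial basis function network fitted by linear least squares; the centers, width, and toy data are assumptions for this example and are unrelated to the cited study.

```python
import numpy as np

# Generic Gaussian RBF network: phi_j(x) = exp(-||x - c_j||^2 / (2*s^2)),
# output is the linear combination Phi @ w. Everything here is illustrative.
rng = np.random.default_rng(0)

def rbf_features(X, centers, s=1.0):
    """Gaussian basis function activations for each sample/center pair."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

# Toy 1-D regression: fit y = sin(x) from noisy samples.
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)
centers = np.linspace(-3, 3, 10)[:, None]

Phi = rbf_features(X, centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least squares for output weights

X_test = np.array([[0.5]])
print(rbf_features(X_test, centers) @ w)     # should be close to sin(0.5) ~ 0.479
```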

Shepard interpolation has been applied to several systems. Wu et al. have used it to construct the ground-state potential energy surface for the reaction CH4 + H -> CH3 + H2 [63]. Neural networks can be described as general, non-linear fitting functions that do not require any assumptions about the functional form of the... [Pg.107]
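As a sketch of the basic idea, the following implements classic Shepard (inverse-distance-weighted) interpolation on toy data. PES applications typically use a modified Shepard scheme that interpolates local Taylor expansions of the energy; only the weighting idea is shown here.

```python
import numpy as np

# Classic Shepard interpolation: f(x) = sum_i w_i * f_i / sum_i w_i,
# with weights w_i = |x - x_i|^(-p). Data points are toy values.
def shepard(x, pts, vals, p=2):
    d = np.linalg.norm(pts - x, axis=1)
    if np.any(d == 0):                 # query coincides with a data point
        return vals[np.argmin(d)]
    w = d ** (-p)                      # inverse-distance weights
    return np.sum(w * vals) / np.sum(w)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # toy geometries
vals = np.array([-1.0, -0.5, -0.2])                     # toy energies
print(shepard(np.array([0.4, 0.3]), pts, vals))
```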

With progress in quantum chemical calculations on molecules, analyses based on HOMO/LUMO (highest occupied molecular orbital/lowest unoccupied molecular orbital) energies and absolute hardness/absolute electronegativity, along with research into these new and old descriptors, have been reported recently. Furthermore, new methods using neural networks, as well as multiple regression analysis, cluster analysis, and principal component analysis, have been applied to investigate the relationship between the properties and functions of molecules and each descriptor. [Pg.94]
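The hardness and electronegativity descriptors mentioned here can be estimated from frontier-orbital energies via Koopmans-type approximations, as in this sketch; the orbital energies are illustrative placeholders, not results for any specific molecule.

```python
# Koopmans-type estimates: I ~ -E_HOMO, A ~ -E_LUMO, so
# absolute hardness eta = (I - A)/2 and absolute electronegativity chi = (I + A)/2.
def hardness_electronegativity(e_homo, e_lumo):
    I, A = -e_homo, -e_lumo            # ionization energy, electron affinity
    eta = (I - A) / 2.0                # absolute hardness
    chi = (I + A) / 2.0                # absolute electronegativity
    return eta, chi

# Illustrative orbital energies in eV.
eta, chi = hardness_electronegativity(e_homo=-9.0, e_lumo=-0.5)
print(f"hardness = {eta:.2f} eV, electronegativity = {chi:.2f} eV")
```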

Balabin RM, Lomakina EI (2009) Neural network approach to quantum-chemistry data: accurate prediction of density functional theory energies. J Chem Phys 131(7):074104 [Pg.919]

Other recently published correlative methods for predicting Tg include the group interaction modeling (GIM) approach of Porter (42), neural networks (43-45), genetic function algorithms (46), the CODESSA (Comprehensive Descriptors for Structural and Statistical Analysis) method (47), the energy, volume, mass (EVM) approach (48,49), correlation to the results of semiempirical quantum mechanical calculations of the electronic structure of the monomer (50), and a method that combines a thermodynamic equation of state based on lattice fluid theory with group contributions (51). [Pg.3584]

A very important aspect of setting up a neural network potential is the choice of the input coordinates. Cartesian coordinates cannot be used directly as NN input, because they are not invariant with respect to rotation and translation of the system: since the NN output depends on the absolute numbers fed into the input nodes, simply translating or rotating a molecule would change its energy. Instead, some form of internal coordinates, such as interatomic distances and bond angles, or functions of these coordinates, should be used. To define a non-periodic structure containing N atoms uniquely, 3N-6 coordinates are required. However, redundant information does not pose a problem for NNs, and sometimes the complete set of N(N-1)/2 interatomic distances is used, as in the sketch below. [Pg.16]
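A minimal sketch of such an invariant input representation: it computes the N(N-1)/2 interatomic distances from Cartesian coordinates and checks that a rotation leaves them unchanged. The water geometry is a toy example.

```python
import numpy as np

# Rotation- and translation-invariant NN input: the N(N-1)/2 interatomic
# distances of a structure, extracted from its Cartesian coordinates.
def distance_vector(coords):
    """coords: (N, 3) Cartesian positions -> flat vector of pair distances."""
    i, j = np.triu_indices(len(coords), k=1)
    return np.linalg.norm(coords[i] - coords[j], axis=1)

water = np.array([[ 0.000, 0.000, 0.000],   # O (toy geometry, Angstrom)
                  [ 0.757, 0.586, 0.000],   # H
                  [-0.757, 0.586, 0.000]])  # H
d = distance_vector(water)
print(d)                                    # 3 distances for N = 3 atoms

# Rotating the molecule leaves the input unchanged:
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
assert np.allclose(d, distance_vector(water @ R.T))
```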

The potential energy of the system is constructed as a sum of individual bond energies. The interactions are truncated using a cutoff function of the interatomic distance rij. The expressions for the repulsive pair potential VR(rij) and the attractive pair potential VA(rij) have been taken from the original Tersoff potential, but the bond-order term bij, which modulates the strength of the attractive contribution, is expressed by a neural network. This many-body term depends on the local environment of the bonds. There is one separate NN for each bond in the system: for each bond ij, every atom bonded to either atom i or atom j provides an input vector for the NN of that bond. As discussed in the previous section, a major... [Pg.26]
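A sketch of this Tersoff-style energy expression with a smooth cutoff function is given below. The parameters are illustrative, and the bond-order term that the NN would supply per bond is replaced here by a constant placeholder.

```python
import numpy as np

# Tersoff-style bond-order energy: E = 1/2 * sum_{i != j} fc(r_ij) * (VR + b_ij * VA).
# In the NN variant described above, b_ij comes from a per-bond neural network;
# here it is a placeholder constant. All parameter values are illustrative.
def fc(r, R=3.0, D=0.2):
    """Smooth cutoff: 1 inside R-D, 0 beyond R+D, sine switch in between."""
    return np.where(r < R - D, 1.0,
           np.where(r > R + D, 0.0,
                    0.5 - 0.5 * np.sin(0.5 * np.pi * (r - R) / D)))

def VR(r, A=1800.0, lam1=2.6):
    return A * np.exp(-lam1 * r)          # repulsive pair potential

def VA(r, B=470.0, lam2=1.7):
    return -B * np.exp(-lam2 * r)         # attractive pair potential

def total_energy(coords, b=1.0):
    E = 0.0
    for i in range(len(coords)):
        for j in range(len(coords)):
            if i == j:
                continue
            r = np.linalg.norm(coords[i] - coords[j])
            E += 0.5 * fc(r) * (VR(r) + b * VA(r))   # b would be the NN output b_ij
    return E

dimer = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
print(total_energy(dimer))
```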

Fig. 10 High-dimensional NN scheme suggested by Behler and Parrinello. First, the Cartesian coordinates xi, yi, zi are transformed to a set of symmetry functions Gi. These are then used as the set of input values for atomic neural networks yielding the atomic energy contributions Ei. Finally, the atomic energies are added to obtain the total energy E of the system.
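A minimal sketch of this decomposition follows, assuming a few radial symmetry functions per atom and random stand-in weights for the atomic NN; a real model uses many more symmetry functions and fitted parameters.

```python
import numpy as np

# Behler-Parrinello decomposition: each atomic environment is encoded by
# symmetry functions G_i, mapped by a small atomic NN to E_i, and the E_i
# are summed to the total energy E. Weights are random placeholders.
def cutoff(r, rc=6.0):
    return np.where(r < rc, 0.5 * (np.cos(np.pi * r / rc) + 1.0), 0.0)

def radial_G(coords, i, etas=(0.5, 1.0, 2.0)):
    """Radial symmetry functions G = sum_j exp(-eta * r_ij^2) * fc(r_ij)."""
    r = np.linalg.norm(np.delete(coords, i, axis=0) - coords[i], axis=1)
    return np.array([np.sum(np.exp(-eta * r**2) * cutoff(r)) for eta in etas])

rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((5, 3)), rng.standard_normal(5)   # hidden layer
W2, b2 = rng.standard_normal(5), 0.0                           # output node

def atomic_energy(G):
    return W2 @ np.tanh(W1 @ G + b1) + b2    # atomic contribution E_i

coords = rng.uniform(0, 4, size=(4, 3))      # toy 4-atom structure
E_total = sum(atomic_energy(radial_G(coords, i)) for i in range(len(coords)))
print(E_total)
```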
Artificial neural networks have also found wide use in chemistry. They have been used to fit spectroscopic data, to investigate quantitative structure-activity relationships (QSAR), to predict deposition rates in chemical vapor deposition, to predict binding sites of biomolecules, to derive pair potentials from diffraction data on liquids, to solve the Schrödinger equation for simple model potentials like the harmonic oscillator, to estimate the fitness function in genetic algorithm optimizations, in experimental data analysis, to predict the secondary structure of proteins, to predict atomic energy levels, and to solve classification problems in clinical chemistry, in particular the differentiation between diseases on the basis of characteristic laboratory data. [Pg.341]

Each node is represented by a grey circle. The goal is to set up a functional relation between the potential energy of a system and its atomic structure; the output node of the NN therefore provides the energy E. In general, feed-forward neural networks can have a vector of output nodes, but for the representation of PESs typically just one output node is used. In order to associate the energy with a structure, the atomic positions have to be presented to the NN in a suitable way. This is done in the nodes of the input layer, each of which represents one degree of freedom Gi (see Fig. 2). [Pg.343]

Fig. 2 A small feed-forward neural network for the interpolation of a three-dimensional function, as indicated by the three nodes in the input layer. It has two hidden layers containing four and three nodes, respectively, and one node in the output layer providing the energy E. All fitting parameters are shown as arrows. The bias node acts as an adjustable offset to shift the nonlinear range of the activation functions at the individual nodes.
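A minimal sketch of the 3-4-3-1 network in Fig. 2, assuming tanh activation functions at the hidden nodes, a linear output node, and random placeholder weights in place of fitted parameters:

```python
import numpy as np

# The small feed-forward network of Fig. 2: three input coordinates, two
# hidden layers (4 and 3 nodes) with tanh activations and bias offsets, and
# one linear output node for the energy E. Weights are random stand-ins.
rng = np.random.default_rng(2)
W1, b1 = rng.standard_normal((4, 3)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((3, 4)), rng.standard_normal(3)
W3, b3 = rng.standard_normal(3), rng.standard_normal()

def nn_energy(G):
    """Map three input coordinates G to a single energy value E."""
    h1 = np.tanh(W1 @ G + b1)      # first hidden layer, 4 nodes
    h2 = np.tanh(W2 @ h1 + b2)     # second hidden layer, 3 nodes
    return W3 @ h2 + b3            # linear output node: the energy E

print(nn_energy(np.array([1.2, 0.8, 2.1])))
```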
