Big Chemical Encyclopedia


Neural network training

The benefits of this approach are clear. Any neural network applied as a mapping device between independent variables and responses requires more computational time and resources than PCR or PLS, so an increase in the dimensionality of the input (characteristic) vector results in a significant increase in computation time; as our observations have shown, the same is not the case with PLS. SVD as a data-transformation technique therefore enables one to apply as many molecular descriptors as are at one's disposal, but finally to use latent variables as an input vector of much lower dimensionality for training neural networks. Again, SVD concentrates most of the relevant information (very often about 95 %) in a few initial columns of the scores matrix. [Pg.217]
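To make this concrete, here is a minimal sketch, in Python with NumPy, of compressing a descriptor matrix via SVD and keeping only the leading latent variables as network inputs. The matrix sizes and the 95 % variance threshold are illustrative assumptions, not values taken from the source:

```python
# Minimal sketch: SVD-based compression of a descriptor matrix before
# neural network training (all data here is randomly generated).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))        # 200 molecules x 500 descriptors (made up)

# Center the columns, then factor Xc = U @ diag(s) @ Vt
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Keep just enough latent variables to capture ~95% of the variance
var_ratio = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var_ratio, 0.95)) + 1

T = U[:, :k] * s[:k]                   # scores: the low-dimensional network input
print(f"{X.shape[1]} descriptors compressed to {k} latent variables")
```

The scores matrix T would then replace the raw descriptor matrix as the training input, cutting the network's input dimensionality from hundreds of descriptors to a handful of latent variables.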

Even so, artificial neural networks exhibit many brainlike characteristics. For example, during training, neural networks may construct an internal mapping/model of an external system. Thus, they are assumed to make sense of the problems with which they are presented. As with any construction of a robust internal model, the external system presented to the network must contain meaningful information. In general, the following anthropomorphic perspectives can be maintained while preparing the data ... [Pg.8]

Especially in the last few years, the number of applications of neural networks has grown exponentially. One reason for this is undoubtedly the fact that neural networks outperform the traditional (linear) techniques in many applications. The large number of samples needed to train neural networks certainly remains a serious bottleneck, and the validation of the results is a further issue of concern. [Pg.680]

Leonard, J., and Kramer, M. A., Improvement of the backpropagation algorithm for training neural networks, Comput. Chem. Eng. 14, 337-341 (1990). [Pg.100]

Petersen, S.B., Bohr, H., Bohr, J., Brunak, S., Cotterill, R.M.J., Fredholm, H. et al. (1990) Training neural networks to analyze biological sequences. Trends in Biotechnology, ... [Pg.309]

M. S. Sanchez and L. A. Sarabia, A stochastic trained neural network for nonparametric hypothesis testing, Chemom. Intell. Lab. Syst., 63(2), 2002, 169-187. [Pg.277]

A number of researchers have tried training neural networks to achieve color constancy. A neural network basically consists of a set of nodes connected by weights (McClelland and Rumelhart 1986; Rumelhart and McClelland 1986; Zell 1994). Artificial neural networks are an abstraction from biological neural networks. Figure 8.2 shows a motor neuron in (a) and a network of eight artificial neurons on the right. A neuron may be in one of... [Pg.194]
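As a toy illustration of "a set of nodes connected by weights", the sketch below implements a single threshold neuron in Python; the inputs, weights, and bias are invented for the example, and a real network wires many such units together with learned weights:

```python
# One artificial neuron: a weighted sum of inputs passed through a
# threshold, so the unit is in one of two states (0 or 1).
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> int:
    activation = float(np.dot(weights, inputs)) + bias
    return 1 if activation > 0 else 0

x = np.array([0.9, 0.1, 0.4])          # example input signals (made up)
w = np.array([0.5, -0.8, 0.3])         # connection weights (made up)
print(neuron(x, w, bias=-0.2))         # prints 1 (fires) or 0 (silent)
```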

Table 7.4 Confusion Matrix from an Ensemble of 100 Trained Neural Networks Predicting Gene Family Target Activity.
Once they are trained, neural networks and decision trees are very fast filter tools in virtual screening approaches. They are therefore applied early in the virtual screening filter cascade. [Pg.248]
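A hedged sketch of that cascade idea follows, with every model, dataset, and threshold a hypothetical placeholder: a cheap trained model screens the whole library first, and only the survivors reach a slower scoring step.

```python
# Toy virtual-screening filter cascade: fast filter first, slow scorer after.
import numpy as np

rng = np.random.default_rng(2)
library = rng.normal(size=(10_000, 16))   # made-up compound descriptors
w = rng.normal(size=16)                   # stand-in for a trained model's weights

def fast_filter(x: np.ndarray) -> bool:
    # Cheap learned filter, e.g. a trained neural network or decision tree.
    return float(x @ w) > 0.0

def expensive_score(x: np.ndarray) -> float:
    # Stand-in for a slow downstream step such as docking.
    return float(np.linalg.norm(x))

survivors = [x for x in library if fast_filter(x)]
hits = sorted(survivors, key=expensive_score)[:100]
print(f"{len(library)} compounds -> {len(survivors)} pass filter -> {len(hits)} scored hits")
```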

NeuroSolutions (http://www.nd.com/) is a powerful commercial neural network modeling software package that provides an icon-based graphical user interface and intuitive wizards to enable users to build and train neural networks easily [78]. It has a large selection of neural network architectures, including FFBPNN, GRNN, PNN, and SVM. A genetic algorithm is also provided to optimize the settings of the neural networks automatically. [Pg.228]

Johansson, U., Konig, R. and Niklasson, L. (2003) Rule extraction from trained neural networks using genetic programming. 13th International Conference on Artificial Neural Networks, Istanbul, Turkey, supplementary proceedings, pp. 13-16. [Pg.407]

Consequently, the optimization in Eq. 3.79 is convex and has a unique solution that can be found efficiently, ruling out the problem of local minima encountered in training neural networks [42]. [Pg.68]

This is a simple example of how investigation of trained neural networks leads to new conclusions about chemical structures. In this case the conclusion is obvious, since it is easy to find the reason for the assignment; in several cases, however, the reason is not obvious. Owing to the logical mathematical framework, the assignment performed by neural networks is always correct: if a result is unexpected, the defect is in the predefined classification, or the descriptor does not properly represent the task. [Pg.192]

A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output, f(x), and the target value y over all the example pairs. When one tries to minimize this cost using gradient descent for the class of neural networks called multilayer perceptrons, one obtains the common and well-known back-propagation algorithm for training neural networks. [Pg.916]
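A minimal, self-contained sketch of that procedure is given below: gradient descent on the mean-squared error, C = (1/N) Σᵢ (f(xᵢ) − yᵢ)², for a one-hidden-layer perceptron. All data, layer sizes, and the learning rate are assumptions made for illustration.

```python
# Back-propagation sketch: gradient descent on the mean-squared error of
# a tiny one-hidden-layer perceptron (toy data, invented hyperparameters).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                          # 64 example pairs, 3 inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy target values

W1 = rng.normal(scale=0.5, size=(3, 8))   # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
lr = 0.1                                  # learning rate (assumption)

for epoch in range(500):
    # Forward pass: compute f(x) for every example
    h = np.tanh(X @ W1)
    f = h @ W2
    err = f - y
    cost = np.mean(err**2)                 # mean-squared error

    # Backward pass: the chain rule propagates the error gradient backwards
    grad_f = 2 * err / len(X)
    grad_W2 = h.T @ grad_f
    grad_h = grad_f @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h**2))  # tanh'(a) = 1 - tanh(a)^2

    # Gradient-descent step on both weight matrices
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print(f"final MSE: {cost:.4f}")
```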

Training a neural network model essentially means selecting one model from the set of allowed models (or, in a Bayesian framework, determining a distribution over the set of allowed models) that minimizes the cost criterion. There are numerous algorithms available for training neural network models; most of them can be viewed as a straightforward application of optimization theory and statistical estimation. Recent developments in this field use particle swarm optimization and other swarm intelligence techniques. [Pg.917]

Evolutionary methods, simulated annealing, expectation-maximization, and non-parametric methods are some commonly used methods for training neural networks. [Pg.917]
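As a hedged illustration of one such derivative-free alternative, the sketch below trains the same kind of toy network with particle swarm optimization, one of the swarm-intelligence techniques mentioned above; every hyperparameter here is an assumption.

```python
# Particle swarm optimization searching weight space to minimize a toy
# network's mean-squared error (all settings are illustrative guesses).
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 3))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

def cost(w_flat: np.ndarray) -> float:
    W1 = w_flat[:24].reshape(3, 8)        # input -> hidden weights
    W2 = w_flat[24:].reshape(8, 1)        # hidden -> output weights
    f = np.tanh(X @ W1) @ W2
    return float(np.mean((f - y) ** 2))

dim, n_particles = 32, 30                 # 3*8 + 8*1 = 32 weights per particle
pos = rng.normal(size=(n_particles, dim)) # each particle is one weight vector
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for step in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    # Standard velocity update: inertia plus pulls toward personal/global bests
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print(f"best MSE found: {pbest_cost.min():.4f}")
```

No gradients are required, which is why such methods also apply to non-differentiable cost criteria.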





