Big Chemical Encyclopedia


Supervised learning artificial neural networks

In the early days of catalyst screening, speed was the primary concern: as much information as possible was collected on a given catalyst under defined process parameters. This approach produces a large number of unrelated single data points with low information content. As soon as correlations between these data can be found, the information density increases. This is the case when reaction kinetics are derived from single data points, or when a supervised artificial neural network has learned to predict relations between data points. [Pg.411]

More complex approaches to this problem involve the use of artificial neural networks [22], Bayesian networks [23] and support vector machines [24], which in turn are based on the same principle of supervised learning [25]. [Pg.556]

Artificial neural networks A machine or program for supervised or unsupervised learning based on a layered network of neurons. Normally, a network is trained to best describe a biological or chemical system, in order to classify new systems. Used for pattern recognition in cheminformatics, QSAR, and bioinformatics. [Pg.748]

Abstract. Artificial neural networks (ANN) are useful components in today's data analysis toolbox. They were initially inspired by the brain but are today accepted to be quite different from it. ANN typically lack scalability and mostly rely on supervised learning, both of which are biologically implausible features. Here we describe and evaluate a novel cortex-inspired hybrid algorithm. It is found to perform on par with a Support Vector Machine (SVM) in classification of activation patterns from the rat olfactory bulb. On-line unsupervised learning is shown to provide significant tolerance to sensor drift, an important property of algorithms used to analyze chemo-sensor data. Scalability of the approach is illustrated on the MNIST dataset of handwritten digits. [Pg.34]

The support vector machine (SVM), introduced by Vapnik and co-workers [13, 32] on the basis of statistical learning theory, is originally a binary supervised classification algorithm. Instead of the traditional empirical risk minimization (ERM) performed by artificial neural networks, the SVM algorithm is based on the structural risk minimization (SRM) principle. In its simplest form, a linear SVM for a two-class problem finds the optimal hyperplane that maximizes the separation between the two classes. The optimal separating hyperplane can be obtained by solving the following quadratic optimization problem ... [Pg.145]
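As a rough illustration of the margin-maximization idea described above, the sketch below trains a soft-margin linear SVM by sub-gradient descent on the primal hinge-loss objective rather than by solving the quadratic program the excerpt refers to; the toy data, hyperparameters, and function name are illustrative assumptions, not part of the source.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Soft-margin linear SVM via sub-gradient descent on the primal
    hinge-loss objective: lam/2 ||w||^2 + mean(max(0, 1 - y(w.x + b))).
    Labels y must be +1 / -1. A simple stand-in for the quadratic program."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                    # points violating the margin
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Two well-separated 2-D clusters (toy data, one class per cluster)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_linear_svm(X, y)
accuracy = (np.sign(X @ w + b) == y).mean()   # fraction correctly classified
print(accuracy)
```

On this easy, linearly separable problem the learned hyperplane should classify essentially all points correctly; a real application would use a dedicated QP or SMO solver.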

A multilayer perceptron (MLP) is a feed-forward artificial neural network model that maps sets of input data onto a set of suitable outputs (Patterson 1998). An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. An MLP employs a supervised learning technique called backpropagation for training the network. The MLP is a modification of the standard linear perceptron and can differentiate data that are not linearly separable. [Pg.425]
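A minimal sketch of the MLP-with-backpropagation idea, using the classic XOR problem as an example of data that are not linearly separable; the layer sizes, learning rate, and seed are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden weights
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)              # forward pass, hidden layer
    yhat = sigmoid(h @ W2 + b2)           # forward pass, output layer
    d2 = yhat - t                         # output delta (cross-entropy loss)
    d1 = (d2 @ W2.T) * h * (1 - h)        # delta backpropagated to hidden layer
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print(pred)
```

A single linear perceptron cannot fit XOR; the nonlinear hidden layer is what makes the mapping learnable.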

Self-Organizing Maps (SOMs), or Kohonen maps, are a type of Artificial Neural Network (ANN) trained using unsupervised learning to produce a low-dimensional (typically two-dimensional) discretized representation of the input space of the training samples, whatever its dimensionality (Zhong et al. 2005). [Pg.896]
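The unsupervised training loop can be sketched as follows: a one-dimensional line of map nodes (rather than the typical 2-D grid, for brevity) is fitted to random 3-D inputs by repeatedly moving the best-matching unit and its neighbours toward each sample. All data, schedules, and sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.random((500, 3))                # toy 3-D input vectors
n_nodes = 10
weights = rng.random((n_nodes, 3))         # a 1-D line of map nodes

def quant_error(W):
    """Mean squared distance from each input to its best-matching unit."""
    return np.mean(np.min(((data[:, None] - W[None]) ** 2).sum(-1), axis=1))

q_init = quant_error(weights)
sigma0, lr0, steps = 3.0, 0.5, 3000
for step in range(steps):
    x = data[rng.integers(len(data))]                   # random training sample
    frac = step / steps
    sigma = sigma0 * (1 - frac) + 0.5 * frac            # shrinking neighbourhood
    lr = lr0 * (1 - frac) + 0.01 * frac                 # decaying learning rate
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best-matching unit
    dist = np.abs(np.arange(n_nodes) - bmu)             # grid distance to BMU
    h = np.exp(-dist ** 2 / (2 * sigma ** 2))           # Gaussian neighbourhood
    weights += lr * h[:, None] * (x - weights)          # pull nodes toward input

q_final = quant_error(weights)
print(q_init, q_final)
```

After training, each input's best-matching unit gives its coordinate in the discretized low-dimensional representation; the quantization error should drop relative to the random initial map.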

Artificial neural networks (ANNs) are computer-based systems that can learn from previously classified known examples and can perform generalized recognition - that is, identification - of previously unseen patterns. Multilayer perceptrons (MLPs) are supervised neural networks and, as such, can be trained to model the mapping of input data (e.g. in this study, morphological character states of individual specimens) to known, previously defined classes. [Pg.208]

Figure 2 illustrates a so-called controlled (supervised) learning paradigm, in which the network is trained to predict the object one step ahead. The feedback can come from two sources, depending on the position of the FFN-RecN switch. In the FFN position, the artificial neural network takes the structure of a feed-forward network, while in the RecN position, the network structure corresponds to a recurrent network.
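The one-step-ahead setup can be sketched as a feed-forward network supervised with the next sample of a series as its target; this minimal version uses a toy sine signal and does not reproduce the figure's FFN/RecN feedback switch, and all sizes and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
series = np.sin(np.linspace(0, 20 * np.pi, 2000))   # toy signal to predict

window = 8                                   # past samples fed to the network
X = np.array([series[i:i + window] for i in range(len(series) - window)])
t = series[window:]                          # supervised target: the next sample

W1 = rng.normal(0, 0.5, (window, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def predict(inputs):
    return (np.tanh(inputs @ W1 + b1) @ W2 + b2).ravel()

mse0 = np.mean((predict(X) - t) ** 2)        # error of the untrained network
lr = 0.1
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                 # forward pass, hidden layer
    err = (h @ W2 + b2).ravel() - t
    d2 = err[:, None] / len(t)               # output delta, mean-squared-error loss
    d1 = (d2 @ W2.T) * (1 - h ** 2)          # delta backpropagated through tanh
    W2 -= lr * h.T @ d2; b2 -= lr * d2.sum(axis=0)
    W1 -= lr * X.T @ d1; b1 -= lr * d1.sum(axis=0)

mse = np.mean((predict(X) - t) ** 2)
print(mse0, mse)
```

Training should reduce the one-step prediction error well below that of the untrained network; in the recurrent (RecN) configuration the network's own past predictions would be fed back as inputs instead.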


See other pages where Supervised learning artificial neural networks is mentioned: [Pg.3142]    [Pg.652]    [Pg.199]    [Pg.47]    [Pg.178]    [Pg.259]    [Pg.323]    [Pg.123]    [Pg.24]    [Pg.205]    [Pg.208]    [Pg.34]    [Pg.22]    [Pg.57]    [Pg.41]    [Pg.42]    [Pg.52]    [Pg.2896]    [Pg.4550]    [Pg.192]    [Pg.121]    [Pg.128]    [Pg.88]    [Pg.111]    [Pg.360]    [Pg.437]    [Pg.89]   
See also in sourсe #XX -- [ Pg.135 ]





© 2024 chempedia.info