Big Chemical Encyclopedia



Neural networks architecture

Figure 9-16. Artificial neural network architecture with a two-layer design, comprising input units, a so-called hidden layer, and an output layer. The squares enclosing the ones depict the bias, which is an extra weight (see Ref. [10] for further details)...
The specific volumes of all nine siloxanes were predicted as a function of temperature and the number of monofunctional units, M, and difunctional units, D. A simple 3-4-1 neural network architecture with just one hidden layer was used. The three input nodes were for the number of M groups, the number of D groups, and the temperature. The hidden layer had four neurons. The predicted variable was the specific volume of the silox-... [Pg.11]
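The 3-4-1 architecture described above amounts to a short forward pass. The sketch below is a minimal illustration only: the sigmoid hidden activation, linear output, and random placeholder weights are assumptions, since the source does not give the trained weights or activation functions.

```python
import numpy as np

# Sketch of a 3-4-1 feed-forward network: three inputs (number of M
# groups, number of D groups, temperature), four hidden neurons, one
# output (predicted specific volume). Weights are random placeholders,
# not the trained values from the study.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
b1 = np.zeros(4)               # hidden biases (the "extra weight" units)
W2 = rng.normal(size=(1, 4))   # hidden -> output weights
b2 = np.zeros(1)               # output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(m_groups, d_groups, temperature_k):
    x = np.array([m_groups, d_groups, temperature_k], dtype=float)
    h = sigmoid(W1 @ x + b1)        # hidden layer, 4 neurons
    return float((W2 @ h + b2)[0])  # single output node

v = predict(2, 5, 325.0)            # e.g. two M groups, five D groups, 325 K
```

In practice the inputs (especially temperature) would be scaled to a common range before training; that preprocessing step is omitted here.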

Viscosities of the siloxanes were predicted over a temperature range of 298-348 K. The semi-log plot of viscosity as a function of temperature was linear for the ring compounds. However, for the chain compounds, the viscosity increased rapidly with an increase in the chain length of the molecule. A simple 2-4-1 neural network architecture was used for the viscosity predictions. The molecular configuration was not considered here because of the direct positive effect of addition of both M and D groups on viscosity. The two input variables, therefore, were the siloxane type and the temperature level. Only one hidden layer with four nodes was used. The predicted variable was the viscosity of the siloxane. [Pg.12]

A very simple 2-4-1 neural network architecture with two input nodes, one hidden layer with four nodes, and one output node was used in each case. The two input variables were the number of methylene groups and the temperature. Although neural networks have the ability to learn all the differences, differentials, and other calculated inputs directly from the raw data, the training time for the network can be reduced considerably if these values are provided as inputs. The predicted variable was the density of the ester. The neural network model was trained for discrete numbers of methylene groups over the entire temperature range of 300-500 K. The... [Pg.15]

Step 8. Spectra are classified using an artificial neural network pattern-recognition program. (This program runs on a parallel distributed network of several personal computers [PCs], which facilitates optimization of the neural network architecture.) [Pg.94]

As a chemometric quantitative modeling technique, ANN stands far apart from all of the regression methods mentioned previously, for several reasons. First of all, the model structure cannot be easily shown using a simple mathematical expression, but rather requires a map of the network architecture. A simplified example of a feed-forward neural network architecture is shown in Figure 8.17. Such a network structure basically consists of three layers, each of which represents a set of data values and possibly data processing instructions. The input layer contains the inputs to the model (I1-I4). [Pg.264]

Dayhoff, J. (1990). Neural Network Architecture. Van Nostrand Reinhold, New York. [Pg.31]

Figure 12.1. Multi-layered Feed Forward Neural Network Architecture
Fausett, L. (1994). Fundamentals of Neural Networks: Architectures, Algorithms, and Applications. Prentice Hall, Englewood Cliffs, NJ. [Pg.27]

There are literally dozens of kinds of neural network architectures in use. A simple taxonomy divides them into two types based on learning algorithms (supervised, unsupervised) and into subtypes based upon whether they are feed-forward or feedback type networks. In this chapter, two other commonly used architectures, radial basis functions and Kohonen self-organizing architectures, will be discussed. Additionally, variants of multilayer perceptrons that have enhanced statistical properties will be presented. [Pg.41]
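As a small illustration of the unsupervised branch of this taxonomy, one step of a Kohonen self-organizing map can be sketched as follows. The 1-D unit grid, learning rate, and neighbourhood radius below are illustrative assumptions, not details from the text.

```python
import numpy as np

# Minimal Kohonen self-organizing map update on a 1-D grid of units.
# Unsupervised: no target values are used anywhere.
rng = np.random.default_rng(1)
n_units, n_features = 10, 3
weights = rng.random((n_units, n_features))  # one weight vector per unit

def som_step(x, weights, lr=0.1, radius=1):
    """Move the best-matching unit (and its grid neighbours) toward x."""
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    for j in range(max(0, bmu - radius), min(n_units, bmu + radius + 1)):
        weights[j] += lr * (x - weights[j])
    return bmu

for x in rng.random((200, n_features)):  # present the data repeatedly
    som_step(x, weights)
```

Repeated over many presentations of the data, neighbouring units on the grid come to represent similar inputs, which is what makes the map "self-organizing."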

The fundamental idea behind training, for all neural network architectures, is this: pick a set of weights (often randomly), apply the inputs to the network, and see how the network performs with this set of weights. If it doesn't perform well, modify the weights by some algorithm (specific to each architecture) and repeat the procedure. This iterative process is continued until some pre-specified stopping criterion has been reached. [Pg.51]
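The generic loop described above can be sketched in a few lines. Here a linear model trained by gradient descent stands in for the architecture-specific update rule; the synthetic data, learning rate, and stopping threshold are all illustrative choices.

```python
import numpy as np

# Generic training loop: initialize weights randomly, measure performance,
# adjust by some update rule, repeat until a stopping criterion is met.
rng = np.random.default_rng(2)
X = rng.random((50, 2))
y_true = X @ np.array([1.5, -0.7]) + 0.3   # synthetic target data

w = rng.normal(size=2)                      # pick a set of weights (randomly)
b = 0.0
for epoch in range(1000):
    err = (X @ w + b) - y_true
    mse = float(np.mean(err ** 2))          # how well does the network perform?
    if mse < 1e-6:                          # pre-specified stopping criterion
        break
    w -= 0.5 * (2 / len(X)) * (X.T @ err)   # modify the weights ...
    b -= 0.5 * (2 / len(X)) * float(np.sum(err))  # ... and repeat
```

Other common stopping criteria include a fixed number of epochs or an increase in error on a held-out validation set (early stopping).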

Neural network architectures: 2L/FF = two-layer, feed-forward network (i.e., perceptron); 3L or 4L/FF = three- or four-layer, feed-forward network (i.e., multi-layer perceptron). [Pg.104]

Neural network architectures and learning algorithms (see Table 9.1): GA = Genetic Algorithm; SCG = Scaled Conjugate Gradient. [Pg.115]

The neural network architecture optimized by the evolutionary algorithm could be analyzed for a biochemical interpretation and feature extraction. One may infer the importance of input properties based on the relative connectivity of the input units. For example, bulkiness, which was not connected at all, was probably unimportant. On the other hand, units for polarity, refractivity, hydrophobicity, and surface area were highly connected, indicating these are important features of membrane transition regions. [Pg.135]
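The connectivity-based feature ranking described above can be illustrated with a toy example. The weight matrix and property names below are hypothetical stand-ins, not values from the study.

```python
import numpy as np

# Rank input properties by how many connections survive architecture
# optimization: a pruned connection is stored as exactly 0.
properties = ["bulkiness", "polarity", "refractivity",
              "hydrophobicity", "surface_area"]
# Hypothetical input-to-hidden weights (rows = inputs, cols = hidden units).
W = np.array([
    [0.0,  0.0, 0.0],   # bulkiness: fully disconnected
    [0.9, -0.4, 0.7],
    [0.5,  0.6, -0.3],
    [-0.8, 0.5, 0.9],
    [0.4, -0.7, 0.6],
])

connectivity = (W != 0).sum(axis=1)          # surviving connections per input
importance = dict(zip(properties, connectivity))
unimportant = [p for p, c in importance.items() if c == 0]
```

Inputs with zero surviving connections (here, bulkiness) can be inferred to be unimportant, while highly connected inputs are candidates for biochemically meaningful features.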

Neural networks, when applied to in vitro-in vivo correlations, have the potential to be a useful predictive tool. Dowell et al. from the FDA and the Elan Corporation tested this hypothesis using a number of neural network architectures on a data set including both in vitro inputs (% dissolved) and in vivo outputs (plasma concentrations). They concluded that the approach is viable, a conclusion also supported by studies by Chen et al. using a controlled release tablet formulation based on mixtures of hydrophilic polymers. [Pg.2409]

Owing to the nature of neural network architecture, there are limitations to the number of descriptors that can be introduced to model an activity. Although the GA from the previous analysis vastly reduced the number of X variables to produce an improved PLS model, it is still too high a number to use as inputs here. To this end, PCA was used to assess the intrinsic dimensionality of those X variables selected by the GA described under the previous heading. We generated only one model, which incorporated all non-QCT descriptors and the QCT descriptors at level C, as this would provide the most promising results. Eight PCs were extracted as linear combinations of descriptors that explained 90% of the variance in the GA output and used as variables for each molecule. This is illustrated in Figure 4. [Pg.308]

Figure 2. Schematic of the neural network architecture used in this study.
NeuroSolutions (http://www.nd.com/) is a powerful commercial neural network modeling software package that provides an icon-based graphical user interface and intuitive wizards to enable users to build and train neural networks easily [78]. It has a large selection of neural network architectures, including FFBPNN, GRNN, PNN, and SVM. A genetic algorithm is also provided to automatically optimize the settings of the neural networks. [Pg.228]

These techniques span the entire field from multiple linear regression (MLR)-type methods and various forms of neural network architectures to rule-based techniques of different kinds. These approaches also span from single models to multiple models, that is, consensus or ensemble modeling. Terms like machine learning and data or information fusion are also frequently encountered in this area of research, as well as the concepts of applicability domain and validation. [Pg.388]

Leonard, J., Kramer, M.A., and Ungar, L.H. (1992). A neural network architecture that computes its own reliability. Comput. Chem. Eng., 16(9), 819-835. [Pg.289]



© 2024 chempedia.info