Other Neural Network Paradigms

Feedforward CNNs trained with the on-line backpropagation algorithm, or one of its many variants, are the most common method used to address scientific and technological problems. Although no other CNN paradigms have demonstrated …
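Since the passage centres on this algorithm, a minimal sketch of on-line (pattern-by-pattern) backpropagation for a one-hidden-layer feedforward network may help; the toy data, network size, and learning rate below are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) on [-pi, pi] (illustrative only)
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
Y = np.sin(X)

n_in, n_hid, n_out = 1, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hid));  b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, n_out)); b2 = np.zeros(n_out)
eta = 0.05  # learning rate (tuning parameter)

for epoch in range(500):
    for x, y in zip(X, Y):              # on-line: update after every pattern
        h = np.tanh(x @ W1 + b1)        # hidden activations
        y_hat = h @ W2 + b2             # linear output node
        err = y_hat - y                 # output error
        # Backpropagate the error through the two layers
        gW2 = np.outer(h, err);  gb2 = err
        dh = (W2 @ err) * (1 - h**2)    # tanh'(z) = 1 - tanh(z)^2
        gW1 = np.outer(x, dh);   gb1 = dh
        W2 -= eta * gW2;  b2 -= eta * gb2
        W1 -= eta * gW1;  b1 -= eta * gb1
```

The per-pattern update (rather than accumulating gradients over the whole training set) is what distinguishes the on-line variant from batch backpropagation.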

The radial basis function (RBF) network is a two-layer network whose output nodes form a linear combination of nonlinear basis functions computed by the hidden-layer nodes [137-143]. The basis functions in the hidden layer produce a significant nonzero response only when the input falls within a small localized region of the input space (the receptive field). In general, the hidden-layer nodes use Gaussian response functions, with the position (w) and width (σ) used as variables …
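A minimal sketch of this architecture, assuming the usual Gaussian form φ_j(x) = exp(−‖x − w_j‖² / 2σ_j²); the function names and array shapes are illustrative:

```python
import numpy as np

def rbf_hidden_layer(x, centers, widths):
    """Gaussian basis responses: near-zero unless x falls inside a node's
    receptive field around its position w, with width sigma."""
    # centers: (h, d) node positions w; widths: (h,) node widths sigma
    d2 = np.sum((centers - x)**2, axis=1)        # squared distance to each centre
    return np.exp(-d2 / (2.0 * widths**2))       # phi_j(x)

def rbf_output(x, centers, widths, W, b):
    """Output nodes form a linear combination of the basis functions."""
    return rbf_hidden_layer(x, centers, widths) @ W + b
```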

Radial basis function networks are as good at function approximation and classification as backpropagation networks, but they require much less time to train and do not suffer as severely from local minima or connection-weight freezing (sometimes called network paralysis). Radial basis function CNNs are also known to be universal approximators, and they provide a convenient measure of the reliability and confidence of their output (based on the density of the training data). In addition, the functional equivalence of these networks with fuzzy inference systems has shown that the membership functions within a rule are equivalent to Gaussian functions with the same variance (σ²), and that the number of receptive-field nodes is equivalent to the number of fuzzy if-then rules.
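The fast training referred to above comes from fixing the receptive fields first and then solving the linear output layer in closed form. A sketch under those assumptions — the centre-selection and width heuristics below are common choices, not prescribed by the text:

```python
import numpy as np

def train_rbf(X, Y, n_centers=10, seed=0):
    """Fit an RBF network: pick centres from the data, fix a shared width,
    then solve the output weights by linear least squares -- no gradient
    descent, hence no local minima in the output layer."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_centers, replace=False)]
    # Common heuristic: sigma = d_max / sqrt(2M), d_max = max inter-centre distance
    d = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    sigma = d.max() / np.sqrt(2 * n_centers)
    Phi = np.exp(-((X[:, None] - centers[None, :])**2).sum(-1) / (2 * sigma**2))
    W, *_ = np.linalg.lstsq(np.c_[Phi, np.ones(len(X))], Y, rcond=None)
    return centers, sigma, W

def rbf_confidence(x_new, centers, sigma):
    """Density of training receptive fields at x_new -- a rough reliability
    measure: low total activation means the query is far from training data."""
    phi = np.exp(-((x_new - centers)**2).sum(-1) / (2 * sigma**2))
    return phi.sum()
```

The `rbf_confidence` helper is a hypothetical illustration of the density-based reliability measure mentioned above, not a formula taken from the cited sources.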

Based on the neural network model of the GAP process, both forward and reverse predictions of conditions for maximizing the weight fraction of the powder …

Inputs were X1 (melt temperature in °C), X2 (melt stream size in inches), and X3 (type of material); outputs were weight fractions of particles in the micrometer ranges Y1 (0–53), Y2 (53–106), Y3 (106–150), Y4 (150–295), and Y5 (295–600).
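A minimal sketch of a network with this input/output structure, assuming one hidden layer, a one-hot encoding for the categorical input X3, and a softmax output so the five predicted weight fractions are non-negative and sum to one. These are modelling assumptions, since the original architecture is not fully specified here:

```python
import numpy as np

def gap_forward(x1_temp_C, x2_stream_in, x3_material_id, params, n_materials=3):
    """Predict weight fractions Y1..Y5 for the five size ranges (um).
    The softmax output enforcing fractions >= 0 that sum to 1 is an
    assumption; the original model's output transfer function is not given."""
    x3 = np.eye(n_materials)[x3_material_id]      # one-hot encode material type
    x = np.concatenate(([x1_temp_C, x2_stream_in], x3))
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)                      # hidden layer
    z = h @ W2 + b2                               # five output nodes
    e = np.exp(z - z.max())
    return e / e.sum()                            # Y1..Y5
```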


Literature in the area of neural networks has been expanding at an enormous rate with the development of new and efficient algorithms. Neural networks have been shown to have enormous processing capability, and the authors have implemented many hybrid approaches based on this technique. The authors have implemented an ANN-based approach in several areas of polymer science, and the overall results obtained have been very encouraging. The case studies and algorithms presented in this chapter were very simple to implement. Given the current rate at which new approaches in neural networks appear, readers may find other paradigms that provide new opportunities in their area of interest.

When an accurate model of the reaction kinetics is not available (e.g., due to the lack of reliable data for identification), the previously developed approach may be ineffective, and model-free strategies for the estimation of the effect of the heat released by the reaction, aq, must be adopted. In detail, the approach in [27] can be considered, where aq is estimated, together with the heat transfer coefficient, via a suitably designed nonlinear observer [24]. Other model-free approaches can be adopted, e.g., those based on the adoption of universal interpolators (neural networks, polynomials) for the direct online estimation of the heat [16], and purely neural approaches [11]. Approaches based on the combination of neural and model-based paradigms [2] or on tendency models [25] can be considered as well.
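To make the universal-interpolator idea concrete, here is a heavily simplified sketch: the unknown reaction heat is represented by a small Gaussian-basis network in the measured temperature and adapted online from the temperature prediction error of an assumed energy balance. Every symbol, numerical value, and the update law is an illustrative assumption, not the scheme of [16], [24], or [27]:

```python
import numpy as np

# Assumed energy balance: m*cp*dT/dt = q(t) + U*A*(Tj - T); all values illustrative.
m_cp, UA, dt = 500.0, 50.0, 1.0

def basis(T):
    """Small Gaussian basis in the measured temperature (assumed interpolator)."""
    centers = np.linspace(300.0, 400.0, 10)
    return np.exp(-(T - centers)**2 / (2 * 5.0**2))

w = np.zeros(10)      # interpolator weights, adapted online
T_hat = 300.0         # model temperature estimate
gamma = 0.5           # adaptation gain (tuning parameter)

def observer_step(T_meas, Tj):
    """One step: predict T from the current heat estimate, then correct
    the interpolator weights from the temperature prediction error."""
    global w, T_hat
    phi = basis(T_meas)
    q_hat = w @ phi                                    # current heat estimate
    T_hat += dt * (q_hat + UA * (Tj - T_hat)) / m_cp   # predicted temperature
    e = T_meas - T_hat                                 # prediction error
    w += gamma * e * phi                               # gradient-like update
    return q_hat
```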

Finally, Chapter 6 goes into two new regression paradigms: artificial neural networks and support vector machines. Quite different from the other regression methods presented in the book, they are gaining acceptance because they can handle non-linear systems and/or noisy data. This step forward is introduced briefly and, once more, a review is presented with practical applications in the atomic spectroscopy field. Not surprisingly, most papers deal with complex measurements (e.g. non-linear calibration or…

The above formulation was adapted from Hornik (1991). However, other results concerning approximation by means of feed-forward neural networks (Kůrková, 1992; Hornik et al., 1994; Pinkus, 1998; Kůrková, 2002; Kainen et al., 2007) rely on essentially the same paradigm: the required number of hidden neurons h is unknown; the only guarantee is that some h always exists such that a multilayer perceptron with h hidden neurons can compute a function F with the desired properties. The actual value of h depends on the approximated dependence D and on the aspects discussed in the previous points (the function space considered and the required degree of closeness between F and D); however, the fact that such an h exists is independent of D and of those aspects.
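For reference, the Hornik (1991) result this passage paraphrases can be stated compactly (σ continuous, bounded and nonconstant, K ⊂ R^d compact):

```latex
% Universal approximation (Hornik, 1991): some h exists, but is not given a priori.
\forall D \in C(K)\;\;\forall \varepsilon > 0\;\;\exists h \in \mathbb{N},\;
v_j, b_j \in \mathbb{R},\; w_j \in \mathbb{R}^d:\qquad
\sup_{x \in K} \Bigl|\, \sum_{j=1}^{h} v_j \, \sigma\!\left(w_j^{\top} x + b_j\right) - D(x) \Bigr| < \varepsilon .
```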

